00:00:00.001 Started by upstream project "autotest-per-patch" build number 126178 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.087 The recommended git tool is: git 00:00:00.087 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.238 > git --version # 'git version 2.39.2' 00:00:00.238 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.275 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.275 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.013 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.025 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.041 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.041 > git config core.sparsecheckout # timeout=10 00:00:06.053 > git read-tree -mu HEAD # timeout=10 00:00:06.076 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.121 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.121 > git rev-list --no-walk d49304e16352441ae7eebb2419125dd094201f3e # timeout=10 00:00:06.231 [Pipeline] Start of Pipeline 00:00:06.247 [Pipeline] library 00:00:06.249 Loading library shm_lib@master 00:00:06.249 Library shm_lib@master is cached. Copying from home. 00:00:06.266 [Pipeline] node 00:00:06.274 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.276 [Pipeline] { 00:00:06.290 [Pipeline] catchError 00:00:06.291 [Pipeline] { 00:00:06.304 [Pipeline] wrap 00:00:06.313 [Pipeline] { 00:00:06.320 [Pipeline] stage 00:00:06.321 [Pipeline] { (Prologue) 00:00:06.548 [Pipeline] sh 00:00:06.833 + logger -p user.info -t JENKINS-CI 00:00:06.856 [Pipeline] echo 00:00:06.857 Node: GP8 00:00:06.867 [Pipeline] sh 00:00:07.176 [Pipeline] setCustomBuildProperty 00:00:07.185 [Pipeline] echo 00:00:07.186 Cleanup processes 00:00:07.190 [Pipeline] sh 00:00:07.467 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.467 3181093 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.478 [Pipeline] sh 00:00:07.759 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.759 ++ awk '{print $1}' 00:00:07.759 ++ grep -v 'sudo pgrep' 00:00:07.759 + sudo kill -9 00:00:07.759 + true 00:00:07.770 [Pipeline] cleanWs 00:00:07.778 [WS-CLEANUP] Deleting project workspace... 00:00:07.778 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.878 [WS-CLEANUP] done 00:00:07.884 [Pipeline] setCustomBuildProperty 00:00:07.900 [Pipeline] sh 00:00:08.203 + sudo git config --global --replace-all safe.directory '*' 00:00:08.274 [Pipeline] httpRequest 00:00:08.310 [Pipeline] echo 00:00:08.311 Sorcerer 10.211.164.101 is alive 00:00:08.317 [Pipeline] httpRequest 00:00:08.321 HttpMethod: GET 00:00:08.321 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.322 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.337 Response Code: HTTP/1.1 200 OK 00:00:08.337 Success: Status code 200 is in the accepted range: 200,404 00:00:08.337 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:12.453 [Pipeline] sh 00:00:12.741 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:12.757 [Pipeline] httpRequest 00:00:12.773 [Pipeline] echo 00:00:12.775 Sorcerer 10.211.164.101 is alive 00:00:12.783 [Pipeline] httpRequest 00:00:12.787 HttpMethod: GET 00:00:12.787 URL: http://10.211.164.101/packages/spdk_6151edad3baa701a41f1867f128dcf8b1042d56c.tar.gz 00:00:12.791 Sending request to url: http://10.211.164.101/packages/spdk_6151edad3baa701a41f1867f128dcf8b1042d56c.tar.gz 00:00:12.810 Response Code: HTTP/1.1 200 OK 00:00:12.810 Success: Status code 200 is in the accepted range: 200,404 00:00:12.811 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6151edad3baa701a41f1867f128dcf8b1042d56c.tar.gz 00:00:55.460 [Pipeline] sh 00:00:55.750 + tar --no-same-owner -xf spdk_6151edad3baa701a41f1867f128dcf8b1042d56c.tar.gz 00:00:58.299 [Pipeline] sh 00:00:58.584 + git -C spdk log --oneline -n5 00:00:58.584 6151edad3 test/check_so_deps: Simplify check_header_filenames() 00:00:58.584 44e72e4e7 autopackage: Rename autopackage.sh to release_build.sh 00:00:58.584 255871c19 autopackage: Move core of the script to autobuild 00:00:58.584 bd4841ef7 autopackage: Replace SPDK_TEST_RELEASE_BUILD with SPDK_TEST_PACKAGING 00:00:58.584 719d03c6a sock/uring: only register net impl if supported 00:00:58.598 [Pipeline] } 00:00:58.615 [Pipeline] // stage 00:00:58.626 [Pipeline] stage 00:00:58.629 [Pipeline] { (Prepare) 00:00:58.651 [Pipeline] writeFile 00:00:58.671 [Pipeline] sh 00:00:58.955 + logger -p user.info -t JENKINS-CI 00:00:58.970 [Pipeline] sh 00:00:59.253 + logger -p user.info -t JENKINS-CI 00:00:59.268 [Pipeline] sh 00:00:59.559 + cat autorun-spdk.conf 00:00:59.559 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.559 SPDK_TEST_NVMF=1 00:00:59.559 SPDK_TEST_NVME_CLI=1 00:00:59.559 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.559 SPDK_TEST_NVMF_NICS=e810 00:00:59.559 SPDK_TEST_VFIOUSER=1 00:00:59.559 SPDK_RUN_UBSAN=1 00:00:59.559 NET_TYPE=phy 00:00:59.567 RUN_NIGHTLY=0 00:00:59.573 [Pipeline] readFile 00:00:59.604 [Pipeline] withEnv 00:00:59.607 [Pipeline] { 00:00:59.619 [Pipeline] sh 00:00:59.923 + set -ex 00:00:59.924 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:59.924 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:59.924 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:59.924 ++ SPDK_TEST_NVMF=1 00:00:59.924 ++ SPDK_TEST_NVME_CLI=1 00:00:59.924 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:59.924 ++ SPDK_TEST_NVMF_NICS=e810 00:00:59.924 ++ SPDK_TEST_VFIOUSER=1 00:00:59.924 ++ SPDK_RUN_UBSAN=1 00:00:59.924 ++ NET_TYPE=phy 00:00:59.924 ++ RUN_NIGHTLY=0 00:00:59.924 + case $SPDK_TEST_NVMF_NICS in 00:00:59.924 + 
DRIVERS=ice 00:00:59.924 + [[ tcp == \r\d\m\a ]] 00:00:59.924 + [[ -n ice ]] 00:00:59.924 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:59.924 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:59.924 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:59.924 rmmod: ERROR: Module irdma is not currently loaded 00:00:59.924 rmmod: ERROR: Module i40iw is not currently loaded 00:00:59.924 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:59.924 + true 00:00:59.924 + for D in $DRIVERS 00:00:59.924 + sudo modprobe ice 00:00:59.924 + exit 0 00:00:59.935 [Pipeline] } 00:00:59.955 [Pipeline] // withEnv 00:00:59.960 [Pipeline] } 00:00:59.978 [Pipeline] // stage 00:00:59.987 [Pipeline] catchError 00:00:59.988 [Pipeline] { 00:01:00.000 [Pipeline] timeout 00:01:00.000 Timeout set to expire in 50 min 00:01:00.001 [Pipeline] { 00:01:00.016 [Pipeline] stage 00:01:00.017 [Pipeline] { (Tests) 00:01:00.033 [Pipeline] sh 00:01:00.320 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.320 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.320 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.320 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:00.320 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:00.320 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.320 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:00.320 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.320 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:00.320 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:00.320 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:00.320 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:00.320 + source /etc/os-release 00:01:00.320 ++ NAME='Fedora Linux' 00:01:00.320 ++ VERSION='38 (Cloud Edition)' 00:01:00.320 ++ ID=fedora 00:01:00.320 ++ VERSION_ID=38 00:01:00.320 ++ VERSION_CODENAME= 00:01:00.320 ++ PLATFORM_ID=platform:f38 00:01:00.320 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:00.320 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:00.320 ++ LOGO=fedora-logo-icon 00:01:00.320 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:00.320 ++ HOME_URL=https://fedoraproject.org/ 00:01:00.320 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:00.320 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:00.320 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:00.320 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:00.320 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:00.320 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:00.320 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:00.320 ++ SUPPORT_END=2024-05-14 00:01:00.320 ++ VARIANT='Cloud Edition' 00:01:00.320 ++ VARIANT_ID=cloud 00:01:00.320 + uname -a 00:01:00.320 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:00.320 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:01.694 Hugepages 00:01:01.694 node hugesize free / total 00:01:01.694 node0 1048576kB 0 / 0 00:01:01.694 node0 2048kB 0 / 0 00:01:01.694 node1 1048576kB 0 / 0 00:01:01.694 node1 2048kB 0 / 0 00:01:01.694 00:01:01.694 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:01.694 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.2 8086 
0e22 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:01.694 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:01.694 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:01.694 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:01.694 + rm -f /tmp/spdk-ld-path 00:01:01.694 + source autorun-spdk.conf 00:01:01.694 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.694 ++ SPDK_TEST_NVMF=1 00:01:01.694 ++ SPDK_TEST_NVME_CLI=1 00:01:01.694 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.694 ++ SPDK_TEST_NVMF_NICS=e810 00:01:01.694 ++ SPDK_TEST_VFIOUSER=1 00:01:01.694 ++ SPDK_RUN_UBSAN=1 00:01:01.694 ++ NET_TYPE=phy 00:01:01.694 ++ RUN_NIGHTLY=0 00:01:01.694 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:01.694 + [[ -n '' ]] 00:01:01.694 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:01.694 + for M in /var/spdk/build-*-manifest.txt 00:01:01.694 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:01.694 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.694 + for M in /var/spdk/build-*-manifest.txt 00:01:01.694 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:01.694 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:01.694 ++ uname 00:01:01.694 + [[ Linux == \L\i\n\u\x ]] 00:01:01.694 + sudo dmesg -T 00:01:01.694 + sudo dmesg --clear 00:01:01.694 + dmesg_pid=3181770 00:01:01.694 + [[ Fedora Linux == FreeBSD ]] 00:01:01.694 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.694 + sudo dmesg -Tw 00:01:01.694 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:01.694 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:01.694 + [[ -x /usr/src/fio-static/fio ]] 00:01:01.694 + export FIO_BIN=/usr/src/fio-static/fio 00:01:01.694 + FIO_BIN=/usr/src/fio-static/fio 00:01:01.694 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:01.694 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:01.694 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:01.694 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.694 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:01.694 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:01.694 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.694 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:01.694 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:01.694 Test configuration: 00:01:01.694 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:01.694 SPDK_TEST_NVMF=1 00:01:01.694 SPDK_TEST_NVME_CLI=1 00:01:01.694 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:01.694 SPDK_TEST_NVMF_NICS=e810 00:01:01.694 SPDK_TEST_VFIOUSER=1 00:01:01.694 SPDK_RUN_UBSAN=1 00:01:01.694 NET_TYPE=phy 00:01:01.694 RUN_NIGHTLY=0 12:40:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:01.694 12:40:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:01.695 12:40:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:01.695 12:40:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:01.695 12:40:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.695 12:40:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.695 12:40:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.695 12:40:19 -- paths/export.sh@5 -- $ export PATH 00:01:01.695 12:40:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:01.695 12:40:19 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:01.695 12:40:19 -- common/autobuild_common.sh@473 -- $ date +%s 00:01:01.695 12:40:19 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721040019.XXXXXX 00:01:01.695 12:40:19 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721040019.VLCIsN 00:01:01.695 12:40:19 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:01:01.695 12:40:19 -- 
common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:01:01.695 12:40:19 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:01.695 12:40:19 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:01.695 12:40:19 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:01.695 12:40:19 -- common/autobuild_common.sh@489 -- $ get_config_params 00:01:01.695 12:40:19 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:01.695 12:40:19 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.695 12:40:19 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:01.695 12:40:19 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:01:01.695 12:40:19 -- pm/common@17 -- $ local monitor 00:01:01.695 12:40:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.695 12:40:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.695 12:40:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.695 12:40:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:01.695 12:40:19 -- pm/common@21 -- $ date +%s 00:01:01.695 12:40:19 -- pm/common@21 -- $ date +%s 00:01:01.695 12:40:19 -- pm/common@25 -- $ sleep 1 00:01:01.695 12:40:19 -- pm/common@21 -- $ date +%s 00:01:01.695 12:40:19 -- pm/common@21 -- $ date +%s 00:01:01.695 12:40:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040019 00:01:01.695 12:40:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040019 00:01:01.695 12:40:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040019 00:01:01.695 12:40:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721040019 00:01:01.695 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040019_collect-vmstat.pm.log 00:01:01.695 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040019_collect-cpu-load.pm.log 00:01:01.695 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040019_collect-cpu-temp.pm.log 00:01:01.695 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721040019_collect-bmc-pm.bmc.pm.log 00:01:02.633 12:40:20 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:01:02.633 12:40:20 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.633 12:40:20 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.633 12:40:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:02.633 12:40:20 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.633 Mon Jul 15 10:40:20 AM UTC 2024 00:01:02.633 12:40:20 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.633 v24.09-pre-206-g6151edad3 00:01:02.633 12:40:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:02.633 12:40:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.633 12:40:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.633 12:40:20 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:02.633 12:40:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:02.633 12:40:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.633 ************************************ 00:01:02.633 START TEST ubsan 00:01:02.633 ************************************ 00:01:02.633 12:40:20 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:02.633 using ubsan 00:01:02.633 00:01:02.633 real 0m0.000s 00:01:02.633 user 0m0.000s 00:01:02.633 sys 0m0.000s 00:01:02.633 12:40:20 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:02.633 12:40:20 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:02.633 ************************************ 00:01:02.633 END TEST ubsan 00:01:02.633 ************************************ 00:01:02.892 12:40:20 -- common/autotest_common.sh@1142 -- $ return 0 00:01:02.892 12:40:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:02.892 12:40:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.892 12:40:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.892 12:40:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:02.892 12:40:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.892 12:40:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:02.892 12:40:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:02.892 12:40:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:02.892 12:40:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:02.892 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:02.892 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:03.150 Using 'verbs' RDMA provider 00:01:13.722 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:23.706 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:23.706 Creating mk/config.mk...done. 00:01:23.706 Creating mk/cc.flags.mk...done. 00:01:23.706 Type 'make' to build. 00:01:23.706 12:40:41 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:23.706 12:40:41 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:23.706 12:40:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:23.706 12:40:41 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.706 ************************************ 00:01:23.706 START TEST make 00:01:23.706 ************************************ 00:01:23.706 12:40:41 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:23.706 make[1]: Nothing to be done for 'all'. 
00:01:25.098 The Meson build system 00:01:25.098 Version: 1.3.1 00:01:25.098 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:25.098 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:25.098 Build type: native build 00:01:25.098 Project name: libvfio-user 00:01:25.098 Project version: 0.0.1 00:01:25.098 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:25.098 C linker for the host machine: cc ld.bfd 2.39-16 00:01:25.098 Host machine cpu family: x86_64 00:01:25.098 Host machine cpu: x86_64 00:01:25.098 Run-time dependency threads found: YES 00:01:25.098 Library dl found: YES 00:01:25.098 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:25.098 Run-time dependency json-c found: YES 0.17 00:01:25.098 Run-time dependency cmocka found: YES 1.1.7 00:01:25.098 Program pytest-3 found: NO 00:01:25.098 Program flake8 found: NO 00:01:25.098 Program misspell-fixer found: NO 00:01:25.098 Program restructuredtext-lint found: NO 00:01:25.098 Program valgrind found: YES (/usr/bin/valgrind) 00:01:25.098 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:25.098 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:25.098 Compiler for C supports arguments -Wwrite-strings: YES 00:01:25.098 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:25.098 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:25.098 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:25.098 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:25.098 Build targets in project: 8 00:01:25.098 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:25.098 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:25.098 00:01:25.098 libvfio-user 0.0.1 00:01:25.098 00:01:25.098 User defined options 00:01:25.098 buildtype : debug 00:01:25.098 default_library: shared 00:01:25.098 libdir : /usr/local/lib 00:01:25.098 00:01:25.098 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:25.897 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:26.159 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:26.159 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:26.159 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:26.159 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:26.159 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:26.159 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:26.426 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:26.426 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:26.426 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:26.426 [10/37] Compiling C object samples/null.p/null.c.o 00:01:26.426 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:26.426 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:26.426 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:26.426 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:26.426 [15/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:26.426 [16/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:26.426 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:26.426 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:26.426 [19/37] Compiling C object samples/server.p/server.c.o 00:01:26.426 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:26.426 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:26.426 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:26.426 [23/37] Compiling C object samples/client.p/client.c.o 00:01:26.426 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:26.426 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:26.426 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:26.426 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:26.426 [28/37] Linking target samples/client 00:01:26.426 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:26.426 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:26.685 [31/37] Linking target test/unit_tests 00:01:26.685 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:26.685 [33/37] Linking target samples/server 00:01:26.685 [34/37] Linking target samples/null 00:01:26.685 [35/37] Linking target samples/gpio-pci-idio-16 00:01:26.685 [36/37] Linking target samples/lspci 00:01:26.685 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:26.685 INFO: autodetecting backend as ninja 00:01:26.685 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:26.943 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:27.521 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:27.521 ninja: no work to do. 00:01:32.826 The Meson build system 00:01:32.826 Version: 1.3.1 00:01:32.826 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:32.826 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:32.826 Build type: native build 00:01:32.826 Program cat found: YES (/usr/bin/cat) 00:01:32.826 Project name: DPDK 00:01:32.826 Project version: 24.03.0 00:01:32.826 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:32.826 C linker for the host machine: cc ld.bfd 2.39-16 00:01:32.826 Host machine cpu family: x86_64 00:01:32.826 Host machine cpu: x86_64 00:01:32.826 Message: ## Building in Developer Mode ## 00:01:32.826 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:32.826 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:32.826 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:32.826 Program python3 found: YES (/usr/bin/python3) 00:01:32.826 Program cat found: YES (/usr/bin/cat) 00:01:32.826 Compiler for C supports arguments -march=native: YES 00:01:32.826 Checking for size of "void *" : 8 00:01:32.826 Checking for size of "void *" : 8 (cached) 00:01:32.826 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:32.826 Library m found: YES 00:01:32.826 Library numa found: YES 00:01:32.826 Has header "numaif.h" : YES 00:01:32.826 Library fdt found: NO 00:01:32.826 Library execinfo found: NO 00:01:32.826 Has header "execinfo.h" : YES 00:01:32.826 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:32.826 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:32.826 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:32.826 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:32.826 Run-time dependency openssl found: YES 3.0.9 00:01:32.826 Run-time dependency libpcap found: YES 1.10.4 00:01:32.826 Has header "pcap.h" with dependency libpcap: YES 00:01:32.826 Compiler for C supports arguments -Wcast-qual: YES 00:01:32.826 Compiler for C supports arguments -Wdeprecated: YES 00:01:32.826 Compiler for C supports arguments -Wformat: YES 00:01:32.826 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:32.826 Compiler for C supports arguments -Wformat-security: NO 00:01:32.826 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:32.826 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:32.826 Compiler for C supports arguments -Wnested-externs: YES 00:01:32.826 Compiler for C supports arguments -Wold-style-definition: YES 00:01:32.826 Compiler for C supports arguments -Wpointer-arith: YES 00:01:32.826 Compiler for C supports arguments -Wsign-compare: YES 00:01:32.826 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:32.826 Compiler for C supports arguments -Wundef: YES 00:01:32.826 Compiler for C supports arguments -Wwrite-strings: YES 00:01:32.826 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:32.826 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:32.826 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:32.826 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:32.826 Program objdump found: YES (/usr/bin/objdump) 00:01:32.826 Compiler for C supports arguments -mavx512f: YES 00:01:32.826 Checking if "AVX512 checking" compiles: YES 00:01:32.826 Fetching value of define "__SSE4_2__" : 1 00:01:32.826 Fetching value of define "__AES__" : 1 00:01:32.826 Fetching value of define "__AVX__" : 1 00:01:32.826 Fetching value of define "__AVX2__" : (undefined) 00:01:32.826 Fetching value of define "__AVX512BW__" : (undefined) 00:01:32.826 Fetching value of define "__AVX512CD__" : (undefined) 00:01:32.826 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:32.826 Fetching value of define "__AVX512F__" : (undefined) 00:01:32.826 Fetching value of define "__AVX512VL__" : (undefined) 00:01:32.826 Fetching value of define "__PCLMUL__" : 1 00:01:32.826 Fetching value of define "__RDRND__" : 1 00:01:32.826 Fetching value of define "__RDSEED__" : (undefined) 00:01:32.826 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:32.826 Fetching value of define "__znver1__" : (undefined) 00:01:32.826 Fetching value of define "__znver2__" : (undefined) 00:01:32.826 Fetching value of define "__znver3__" : (undefined) 00:01:32.826 Fetching value of define "__znver4__" : (undefined) 00:01:32.826 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:32.826 Message: lib/log: Defining dependency "log" 00:01:32.826 Message: lib/kvargs: Defining dependency "kvargs" 00:01:32.826 Message: lib/telemetry: Defining dependency "telemetry" 00:01:32.826 Checking for function "getentropy" : NO 00:01:32.826 Message: lib/eal: Defining dependency "eal" 00:01:32.826 Message: lib/ring: Defining dependency "ring" 00:01:32.826 Message: lib/rcu: Defining dependency "rcu" 00:01:32.826 Message: lib/mempool: Defining dependency "mempool" 00:01:32.826 Message: lib/mbuf: Defining dependency "mbuf" 00:01:32.826 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:32.826 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.826 Compiler for C supports arguments -mpclmul: YES 00:01:32.826 Compiler for C supports arguments -maes: YES 00:01:32.826 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:32.826 Compiler for C supports arguments -mavx512bw: YES 00:01:32.826 Compiler for C supports arguments -mavx512dq: YES 00:01:32.826 Compiler for C supports arguments -mavx512vl: YES 00:01:32.826 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:32.826 Compiler for C supports arguments -mavx2: YES 00:01:32.826 Compiler for C supports arguments -mavx: YES 00:01:32.826 Message: lib/net: Defining dependency "net" 00:01:32.826 Message: lib/meter: Defining dependency "meter" 00:01:32.826 Message: lib/ethdev: Defining dependency "ethdev" 00:01:32.826 Message: lib/pci: Defining dependency "pci" 00:01:32.826 Message: lib/cmdline: Defining dependency "cmdline" 00:01:32.826 Message: lib/hash: Defining dependency "hash" 00:01:32.826 Message: lib/timer: Defining dependency "timer" 00:01:32.826 Message: lib/compressdev: Defining dependency "compressdev" 00:01:32.826 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:32.826 Message: lib/dmadev: Defining dependency "dmadev" 00:01:32.826 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:32.826 Message: lib/power: Defining dependency "power" 00:01:32.826 Message: lib/reorder: Defining dependency "reorder" 00:01:32.826 
Message: lib/security: Defining dependency "security" 00:01:32.826 Has header "linux/userfaultfd.h" : YES 00:01:32.826 Has header "linux/vduse.h" : YES 00:01:32.826 Message: lib/vhost: Defining dependency "vhost" 00:01:32.826 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:32.826 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:32.826 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:32.826 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:32.826 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:32.826 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:32.826 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:32.826 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:32.826 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:32.826 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:32.826 Program doxygen found: YES (/usr/bin/doxygen) 00:01:32.826 Configuring doxy-api-html.conf using configuration 00:01:32.826 Configuring doxy-api-man.conf using configuration 00:01:32.826 Program mandb found: YES (/usr/bin/mandb) 00:01:32.826 Program sphinx-build found: NO 00:01:32.826 Configuring rte_build_config.h using configuration 00:01:32.826 Message: 00:01:32.826 ================= 00:01:32.826 Applications Enabled 00:01:32.826 ================= 00:01:32.826 00:01:32.826 apps: 00:01:32.826 00:01:32.826 00:01:32.827 Message: 00:01:32.827 ================= 00:01:32.827 Libraries Enabled 00:01:32.827 ================= 00:01:32.827 00:01:32.827 libs: 00:01:32.827 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:32.827 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:32.827 cryptodev, dmadev, power, reorder, security, vhost, 00:01:32.827 00:01:32.827 Message: 00:01:32.827 =============== 00:01:32.827 Drivers Enabled 00:01:32.827 =============== 00:01:32.827 00:01:32.827 common: 00:01:32.827 00:01:32.827 bus: 00:01:32.827 pci, vdev, 00:01:32.827 mempool: 00:01:32.827 ring, 00:01:32.827 dma: 00:01:32.827 00:01:32.827 net: 00:01:32.827 00:01:32.827 crypto: 00:01:32.827 00:01:32.827 compress: 00:01:32.827 00:01:32.827 vdpa: 00:01:32.827 00:01:32.827 00:01:32.827 Message: 00:01:32.827 ================= 00:01:32.827 Content Skipped 00:01:32.827 ================= 00:01:32.827 00:01:32.827 apps: 00:01:32.827 dumpcap: explicitly disabled via build config 00:01:32.827 graph: explicitly disabled via build config 00:01:32.827 pdump: explicitly disabled via build config 00:01:32.827 proc-info: explicitly disabled via build config 00:01:32.827 test-acl: explicitly disabled via build config 00:01:32.827 test-bbdev: explicitly disabled via build config 00:01:32.827 test-cmdline: explicitly disabled via build config 00:01:32.827 test-compress-perf: explicitly disabled via build config 00:01:32.827 test-crypto-perf: explicitly disabled via build config 00:01:32.827 test-dma-perf: explicitly disabled via build config 00:01:32.827 test-eventdev: explicitly disabled via build config 00:01:32.827 test-fib: explicitly disabled via build config 00:01:32.827 test-flow-perf: explicitly disabled via build config 00:01:32.827 test-gpudev: explicitly disabled via build config 00:01:32.827 test-mldev: explicitly disabled via build config 00:01:32.827 test-pipeline: explicitly disabled via build config 00:01:32.827 test-pmd: explicitly disabled via build config 
00:01:32.827 test-regex: explicitly disabled via build config 00:01:32.827 test-sad: explicitly disabled via build config 00:01:32.827 test-security-perf: explicitly disabled via build config 00:01:32.827 00:01:32.827 libs: 00:01:32.827 argparse: explicitly disabled via build config 00:01:32.827 metrics: explicitly disabled via build config 00:01:32.827 acl: explicitly disabled via build config 00:01:32.827 bbdev: explicitly disabled via build config 00:01:32.827 bitratestats: explicitly disabled via build config 00:01:32.827 bpf: explicitly disabled via build config 00:01:32.827 cfgfile: explicitly disabled via build config 00:01:32.827 distributor: explicitly disabled via build config 00:01:32.827 efd: explicitly disabled via build config 00:01:32.827 eventdev: explicitly disabled via build config 00:01:32.827 dispatcher: explicitly disabled via build config 00:01:32.827 gpudev: explicitly disabled via build config 00:01:32.827 gro: explicitly disabled via build config 00:01:32.827 gso: explicitly disabled via build config 00:01:32.827 ip_frag: explicitly disabled via build config 00:01:32.827 jobstats: explicitly disabled via build config 00:01:32.827 latencystats: explicitly disabled via build config 00:01:32.827 lpm: explicitly disabled via build config 00:01:32.827 member: explicitly disabled via build config 00:01:32.827 pcapng: explicitly disabled via build config 00:01:32.827 rawdev: explicitly disabled via build config 00:01:32.827 regexdev: explicitly disabled via build config 00:01:32.827 mldev: explicitly disabled via build config 00:01:32.827 rib: explicitly disabled via build config 00:01:32.827 sched: explicitly disabled via build config 00:01:32.827 stack: explicitly disabled via build config 00:01:32.827 ipsec: explicitly disabled via build config 00:01:32.827 pdcp: explicitly disabled via build config 00:01:32.827 fib: explicitly disabled via build config 00:01:32.827 port: explicitly disabled via build config 00:01:32.827 pdump: explicitly disabled via build config 00:01:32.827 table: explicitly disabled via build config 00:01:32.827 pipeline: explicitly disabled via build config 00:01:32.827 graph: explicitly disabled via build config 00:01:32.827 node: explicitly disabled via build config 00:01:32.827 00:01:32.827 drivers: 00:01:32.827 common/cpt: not in enabled drivers build config 00:01:32.827 common/dpaax: not in enabled drivers build config 00:01:32.827 common/iavf: not in enabled drivers build config 00:01:32.827 common/idpf: not in enabled drivers build config 00:01:32.827 common/ionic: not in enabled drivers build config 00:01:32.827 common/mvep: not in enabled drivers build config 00:01:32.827 common/octeontx: not in enabled drivers build config 00:01:32.827 bus/auxiliary: not in enabled drivers build config 00:01:32.827 bus/cdx: not in enabled drivers build config 00:01:32.827 bus/dpaa: not in enabled drivers build config 00:01:32.827 bus/fslmc: not in enabled drivers build config 00:01:32.827 bus/ifpga: not in enabled drivers build config 00:01:32.827 bus/platform: not in enabled drivers build config 00:01:32.827 bus/uacce: not in enabled drivers build config 00:01:32.827 bus/vmbus: not in enabled drivers build config 00:01:32.827 common/cnxk: not in enabled drivers build config 00:01:32.827 common/mlx5: not in enabled drivers build config 00:01:32.827 common/nfp: not in enabled drivers build config 00:01:32.827 common/nitrox: not in enabled drivers build config 00:01:32.827 common/qat: not in enabled drivers build config 00:01:32.827 common/sfc_efx: not in 
enabled drivers build config 00:01:32.827 mempool/bucket: not in enabled drivers build config 00:01:32.827 mempool/cnxk: not in enabled drivers build config 00:01:32.827 mempool/dpaa: not in enabled drivers build config 00:01:32.827 mempool/dpaa2: not in enabled drivers build config 00:01:32.827 mempool/octeontx: not in enabled drivers build config 00:01:32.827 mempool/stack: not in enabled drivers build config 00:01:32.827 dma/cnxk: not in enabled drivers build config 00:01:32.827 dma/dpaa: not in enabled drivers build config 00:01:32.827 dma/dpaa2: not in enabled drivers build config 00:01:32.827 dma/hisilicon: not in enabled drivers build config 00:01:32.827 dma/idxd: not in enabled drivers build config 00:01:32.827 dma/ioat: not in enabled drivers build config 00:01:32.827 dma/skeleton: not in enabled drivers build config 00:01:32.827 net/af_packet: not in enabled drivers build config 00:01:32.827 net/af_xdp: not in enabled drivers build config 00:01:32.827 net/ark: not in enabled drivers build config 00:01:32.827 net/atlantic: not in enabled drivers build config 00:01:32.827 net/avp: not in enabled drivers build config 00:01:32.827 net/axgbe: not in enabled drivers build config 00:01:32.827 net/bnx2x: not in enabled drivers build config 00:01:32.827 net/bnxt: not in enabled drivers build config 00:01:32.827 net/bonding: not in enabled drivers build config 00:01:32.827 net/cnxk: not in enabled drivers build config 00:01:32.827 net/cpfl: not in enabled drivers build config 00:01:32.827 net/cxgbe: not in enabled drivers build config 00:01:32.827 net/dpaa: not in enabled drivers build config 00:01:32.827 net/dpaa2: not in enabled drivers build config 00:01:32.827 net/e1000: not in enabled drivers build config 00:01:32.827 net/ena: not in enabled drivers build config 00:01:32.827 net/enetc: not in enabled drivers build config 00:01:32.827 net/enetfec: not in enabled drivers build config 00:01:32.827 net/enic: not in enabled drivers build config 00:01:32.827 net/failsafe: not in enabled drivers build config 00:01:32.827 net/fm10k: not in enabled drivers build config 00:01:32.827 net/gve: not in enabled drivers build config 00:01:32.827 net/hinic: not in enabled drivers build config 00:01:32.827 net/hns3: not in enabled drivers build config 00:01:32.827 net/i40e: not in enabled drivers build config 00:01:32.827 net/iavf: not in enabled drivers build config 00:01:32.827 net/ice: not in enabled drivers build config 00:01:32.827 net/idpf: not in enabled drivers build config 00:01:32.827 net/igc: not in enabled drivers build config 00:01:32.827 net/ionic: not in enabled drivers build config 00:01:32.827 net/ipn3ke: not in enabled drivers build config 00:01:32.827 net/ixgbe: not in enabled drivers build config 00:01:32.827 net/mana: not in enabled drivers build config 00:01:32.827 net/memif: not in enabled drivers build config 00:01:32.827 net/mlx4: not in enabled drivers build config 00:01:32.827 net/mlx5: not in enabled drivers build config 00:01:32.827 net/mvneta: not in enabled drivers build config 00:01:32.827 net/mvpp2: not in enabled drivers build config 00:01:32.827 net/netvsc: not in enabled drivers build config 00:01:32.827 net/nfb: not in enabled drivers build config 00:01:32.827 net/nfp: not in enabled drivers build config 00:01:32.827 net/ngbe: not in enabled drivers build config 00:01:32.828 net/null: not in enabled drivers build config 00:01:32.828 net/octeontx: not in enabled drivers build config 00:01:32.828 net/octeon_ep: not in enabled drivers build config 00:01:32.828 
net/pcap: not in enabled drivers build config 00:01:32.828 net/pfe: not in enabled drivers build config 00:01:32.828 net/qede: not in enabled drivers build config 00:01:32.828 net/ring: not in enabled drivers build config 00:01:32.828 net/sfc: not in enabled drivers build config 00:01:32.828 net/softnic: not in enabled drivers build config 00:01:32.828 net/tap: not in enabled drivers build config 00:01:32.828 net/thunderx: not in enabled drivers build config 00:01:32.828 net/txgbe: not in enabled drivers build config 00:01:32.828 net/vdev_netvsc: not in enabled drivers build config 00:01:32.828 net/vhost: not in enabled drivers build config 00:01:32.828 net/virtio: not in enabled drivers build config 00:01:32.828 net/vmxnet3: not in enabled drivers build config 00:01:32.828 raw/*: missing internal dependency, "rawdev" 00:01:32.828 crypto/armv8: not in enabled drivers build config 00:01:32.828 crypto/bcmfs: not in enabled drivers build config 00:01:32.828 crypto/caam_jr: not in enabled drivers build config 00:01:32.828 crypto/ccp: not in enabled drivers build config 00:01:32.828 crypto/cnxk: not in enabled drivers build config 00:01:32.828 crypto/dpaa_sec: not in enabled drivers build config 00:01:32.828 crypto/dpaa2_sec: not in enabled drivers build config 00:01:32.828 crypto/ipsec_mb: not in enabled drivers build config 00:01:32.828 crypto/mlx5: not in enabled drivers build config 00:01:32.828 crypto/mvsam: not in enabled drivers build config 00:01:32.828 crypto/nitrox: not in enabled drivers build config 00:01:32.828 crypto/null: not in enabled drivers build config 00:01:32.828 crypto/octeontx: not in enabled drivers build config 00:01:32.828 crypto/openssl: not in enabled drivers build config 00:01:32.828 crypto/scheduler: not in enabled drivers build config 00:01:32.828 crypto/uadk: not in enabled drivers build config 00:01:32.828 crypto/virtio: not in enabled drivers build config 00:01:32.828 compress/isal: not in enabled drivers build config 00:01:32.828 compress/mlx5: not in enabled drivers build config 00:01:32.828 compress/nitrox: not in enabled drivers build config 00:01:32.828 compress/octeontx: not in enabled drivers build config 00:01:32.828 compress/zlib: not in enabled drivers build config 00:01:32.828 regex/*: missing internal dependency, "regexdev" 00:01:32.828 ml/*: missing internal dependency, "mldev" 00:01:32.828 vdpa/ifc: not in enabled drivers build config 00:01:32.828 vdpa/mlx5: not in enabled drivers build config 00:01:32.828 vdpa/nfp: not in enabled drivers build config 00:01:32.828 vdpa/sfc: not in enabled drivers build config 00:01:32.828 event/*: missing internal dependency, "eventdev" 00:01:32.828 baseband/*: missing internal dependency, "bbdev" 00:01:32.828 gpu/*: missing internal dependency, "gpudev" 00:01:32.828 00:01:32.828 00:01:32.828 Build targets in project: 85 00:01:32.828 00:01:32.828 DPDK 24.03.0 00:01:32.828 00:01:32.828 User defined options 00:01:32.828 buildtype : debug 00:01:32.828 default_library : shared 00:01:32.828 libdir : lib 00:01:32.828 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:32.828 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:32.828 c_link_args : 00:01:32.828 cpu_instruction_set: native 00:01:32.828 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:32.828 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:32.828 enable_docs : false 00:01:32.828 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:32.828 enable_kmods : false 00:01:32.828 max_lcores : 128 00:01:32.828 tests : false 00:01:32.828 00:01:32.828 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:32.828 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:32.828 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:32.828 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:32.828 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:32.828 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:32.828 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:32.828 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:32.828 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:32.828 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:32.828 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:32.828 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:32.828 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:32.828 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:32.828 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:32.828 [14/268] Linking static target lib/librte_log.a 00:01:32.828 [15/268] Linking static target lib/librte_kvargs.a 00:01:32.828 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:33.771 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:33.771 [18/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.771 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:33.771 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:33.771 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:33.771 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:33.771 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:33.771 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:33.771 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:33.771 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:33.771 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:33.771 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:33.771 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:33.771 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:01:33.771 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:33.771 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:33.771 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:33.771 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:33.771 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:33.771 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:33.771 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:33.771 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:33.771 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:33.771 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:33.771 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:33.771 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:33.771 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:33.771 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:33.771 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:33.771 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:33.771 [47/268] Linking static target lib/librte_telemetry.a 00:01:33.771 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:33.771 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:33.771 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:33.771 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:33.771 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:33.771 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:33.771 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:33.771 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:33.771 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:33.771 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:33.771 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:33.771 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:33.771 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:34.032 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:34.032 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:34.032 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:34.032 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:34.032 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.032 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:34.032 [67/268] Linking target lib/librte_log.so.24.1 00:01:34.298 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:34.298 [69/268] Linking static target lib/librte_pci.a 00:01:34.298 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:34.559 [71/268] Generating symbol file 
lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:34.559 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:34.559 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:34.559 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:34.559 [75/268] Linking target lib/librte_kvargs.so.24.1 00:01:34.559 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:34.559 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:34.559 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:34.559 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:34.559 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:34.559 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:34.559 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:34.559 [83/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:34.559 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:34.559 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:34.559 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:34.559 [87/268] Linking static target lib/librte_ring.a 00:01:34.559 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:34.559 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:34.820 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:34.820 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:34.820 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:34.820 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:34.820 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:34.820 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:34.820 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:34.820 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:34.820 [98/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:34.820 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:34.820 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:34.820 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:34.820 [102/268] Linking static target lib/librte_meter.a 00:01:34.820 [103/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.820 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:34.820 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:34.820 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:34.820 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:34.820 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:34.820 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.820 [110/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:34.820 [111/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:34.820 [112/268] Linking static target lib/librte_eal.a 00:01:34.820 [113/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:34.820 [114/268] Linking static target lib/librte_rcu.a 00:01:34.820 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:34.820 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:34.820 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:34.820 [118/268] Linking static target lib/librte_mempool.a 00:01:34.820 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:34.820 [120/268] Linking target lib/librte_telemetry.so.24.1 00:01:34.820 [121/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.101 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.101 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.101 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:35.101 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:35.101 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:35.101 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:35.101 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:35.101 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:35.101 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:35.101 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:35.101 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:35.101 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:35.371 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:35.371 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:35.371 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.371 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:35.371 [138/268] Linking static target lib/librte_net.a 00:01:35.371 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.371 [140/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.371 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:35.371 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:35.630 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.630 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:35.630 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.630 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:35.630 [147/268] Linking static target lib/librte_cmdline.a 00:01:35.630 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.630 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.630 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.630 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.630 [152/268] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.888 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.888 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.888 [155/268] Linking static target lib/librte_timer.a 00:01:35.888 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.888 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.888 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.888 [159/268] Linking static target lib/librte_dmadev.a 00:01:35.888 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.888 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.888 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.888 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.888 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.888 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:36.146 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:36.146 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:36.146 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.146 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:36.146 [170/268] Linking static target lib/librte_compressdev.a 00:01:36.146 [171/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:36.146 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.146 [173/268] Linking static target lib/librte_power.a 00:01:36.146 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.146 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:36.146 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:36.146 [177/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:36.146 [178/268] Linking static target lib/librte_hash.a 00:01:36.146 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:36.146 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:36.404 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:36.404 [182/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:36.404 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:36.404 [184/268] Linking static target lib/librte_mbuf.a 00:01:36.404 [185/268] Linking static target lib/librte_reorder.a 00:01:36.404 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:36.404 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:36.404 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:36.404 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:36.404 [190/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:36.404 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:36.404 [192/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:36.404 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:36.404 [194/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.404 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.404 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:36.661 [198/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.661 [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:36.661 [200/268] Linking static target drivers/librte_bus_vdev.a 00:01:36.661 [201/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.661 [202/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [203/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.661 [204/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.661 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.661 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.661 [208/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.661 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:36.661 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [211/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:36.661 [212/268] Linking static target lib/librte_security.a 00:01:36.661 [213/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [214/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.661 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.919 [216/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.919 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.919 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.919 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.919 [220/268] Linking static target drivers/librte_mempool_ring.a 00:01:36.919 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:37.176 [222/268] Linking static target lib/librte_cryptodev.a 00:01:37.176 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.176 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:37.176 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.176 [226/268] Linking static target lib/librte_ethdev.a 00:01:38.110 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.482 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 
00:01:41.381 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.381 [230/268] Linking target lib/librte_eal.so.24.1 00:01:41.381 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.381 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:41.381 [233/268] Linking target lib/librte_ring.so.24.1 00:01:41.381 [234/268] Linking target lib/librte_timer.so.24.1 00:01:41.381 [235/268] Linking target lib/librte_meter.so.24.1 00:01:41.381 [236/268] Linking target lib/librte_pci.so.24.1 00:01:41.381 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:41.381 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:41.638 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:41.638 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:41.638 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:41.638 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:41.638 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:41.638 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:41.638 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:41.638 [246/268] Linking target lib/librte_mempool.so.24.1 00:01:41.638 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:41.638 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:41.638 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:41.638 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:41.896 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:41.896 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:41.896 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:41.896 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:01:41.896 [255/268] Linking target lib/librte_net.so.24.1 00:01:42.155 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:42.155 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:42.155 [258/268] Linking target lib/librte_hash.so.24.1 00:01:42.155 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:42.155 [260/268] Linking target lib/librte_security.so.24.1 00:01:42.155 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:42.155 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:42.155 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:42.155 [264/268] Linking target lib/librte_power.so.24.1 00:01:44.681 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:44.681 [266/268] Linking static target lib/librte_vhost.a 00:01:45.616 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.874 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:45.874 INFO: autodetecting backend as ninja 00:01:45.874 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:46.807 CC lib/log/log.o 00:01:46.807 CC lib/log/log_flags.o 00:01:46.807 CC lib/log/log_deprecated.o 00:01:46.807 CC lib/ut/ut.o 
00:01:46.807 CC lib/ut_mock/mock.o 00:01:46.807 LIB libspdk_log.a 00:01:46.807 LIB libspdk_ut.a 00:01:46.807 LIB libspdk_ut_mock.a 00:01:46.807 SO libspdk_ut.so.2.0 00:01:46.807 SO libspdk_ut_mock.so.6.0 00:01:46.807 SO libspdk_log.so.7.0 00:01:46.807 SYMLINK libspdk_ut_mock.so 00:01:46.807 SYMLINK libspdk_ut.so 00:01:46.807 SYMLINK libspdk_log.so 00:01:47.064 CC lib/ioat/ioat.o 00:01:47.064 CC lib/dma/dma.o 00:01:47.064 CC lib/util/base64.o 00:01:47.064 CXX lib/trace_parser/trace.o 00:01:47.064 CC lib/util/bit_array.o 00:01:47.064 CC lib/util/cpuset.o 00:01:47.064 CC lib/util/crc16.o 00:01:47.064 CC lib/util/crc32.o 00:01:47.064 CC lib/util/crc32c.o 00:01:47.064 CC lib/util/crc32_ieee.o 00:01:47.064 CC lib/util/crc64.o 00:01:47.064 CC lib/util/dif.o 00:01:47.064 CC lib/util/fd.o 00:01:47.064 CC lib/util/file.o 00:01:47.064 CC lib/util/hexlify.o 00:01:47.064 CC lib/util/iov.o 00:01:47.064 CC lib/util/math.o 00:01:47.064 CC lib/util/pipe.o 00:01:47.064 CC lib/util/strerror_tls.o 00:01:47.064 CC lib/util/string.o 00:01:47.064 CC lib/util/uuid.o 00:01:47.064 CC lib/util/fd_group.o 00:01:47.064 CC lib/util/xor.o 00:01:47.064 CC lib/util/zipf.o 00:01:47.064 CC lib/vfio_user/host/vfio_user_pci.o 00:01:47.064 CC lib/vfio_user/host/vfio_user.o 00:01:47.322 LIB libspdk_dma.a 00:01:47.322 SO libspdk_dma.so.4.0 00:01:47.322 SYMLINK libspdk_dma.so 00:01:47.580 LIB libspdk_ioat.a 00:01:47.580 LIB libspdk_vfio_user.a 00:01:47.580 SO libspdk_ioat.so.7.0 00:01:47.580 SO libspdk_vfio_user.so.5.0 00:01:47.580 SYMLINK libspdk_ioat.so 00:01:47.580 SYMLINK libspdk_vfio_user.so 00:01:47.580 LIB libspdk_util.a 00:01:47.580 SO libspdk_util.so.9.1 00:01:47.844 SYMLINK libspdk_util.so 00:01:48.180 CC lib/json/json_parse.o 00:01:48.180 CC lib/env_dpdk/env.o 00:01:48.180 CC lib/rdma_provider/common.o 00:01:48.180 CC lib/conf/conf.o 00:01:48.180 CC lib/json/json_util.o 00:01:48.180 CC lib/env_dpdk/memory.o 00:01:48.180 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:48.180 CC lib/json/json_write.o 00:01:48.180 CC lib/env_dpdk/pci.o 00:01:48.180 CC lib/idxd/idxd.o 00:01:48.180 CC lib/vmd/vmd.o 00:01:48.180 CC lib/rdma_utils/rdma_utils.o 00:01:48.180 CC lib/env_dpdk/init.o 00:01:48.180 CC lib/vmd/led.o 00:01:48.180 CC lib/idxd/idxd_user.o 00:01:48.180 CC lib/env_dpdk/threads.o 00:01:48.180 CC lib/idxd/idxd_kernel.o 00:01:48.180 CC lib/env_dpdk/pci_ioat.o 00:01:48.180 CC lib/env_dpdk/pci_virtio.o 00:01:48.180 CC lib/env_dpdk/pci_vmd.o 00:01:48.180 CC lib/env_dpdk/pci_idxd.o 00:01:48.180 CC lib/env_dpdk/pci_event.o 00:01:48.180 CC lib/env_dpdk/sigbus_handler.o 00:01:48.180 CC lib/env_dpdk/pci_dpdk.o 00:01:48.180 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:48.180 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:48.180 LIB libspdk_trace_parser.a 00:01:48.180 SO libspdk_trace_parser.so.5.0 00:01:48.180 LIB libspdk_rdma_provider.a 00:01:48.180 SYMLINK libspdk_trace_parser.so 00:01:48.180 SO libspdk_rdma_provider.so.6.0 00:01:48.180 LIB libspdk_conf.a 00:01:48.452 SO libspdk_conf.so.6.0 00:01:48.452 LIB libspdk_rdma_utils.a 00:01:48.452 SYMLINK libspdk_rdma_provider.so 00:01:48.452 LIB libspdk_json.a 00:01:48.452 SO libspdk_rdma_utils.so.1.0 00:01:48.452 SYMLINK libspdk_conf.so 00:01:48.452 SO libspdk_json.so.6.0 00:01:48.452 SYMLINK libspdk_rdma_utils.so 00:01:48.452 SYMLINK libspdk_json.so 00:01:48.452 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.452 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.452 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.452 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:48.710 LIB libspdk_idxd.a 00:01:48.710 SO 
libspdk_idxd.so.12.0 00:01:48.710 SYMLINK libspdk_idxd.so 00:01:48.710 LIB libspdk_vmd.a 00:01:48.710 SO libspdk_vmd.so.6.0 00:01:48.710 SYMLINK libspdk_vmd.so 00:01:48.710 LIB libspdk_jsonrpc.a 00:01:48.967 SO libspdk_jsonrpc.so.6.0 00:01:48.967 SYMLINK libspdk_jsonrpc.so 00:01:49.225 CC lib/rpc/rpc.o 00:01:49.225 LIB libspdk_rpc.a 00:01:49.225 SO libspdk_rpc.so.6.0 00:01:49.481 SYMLINK libspdk_rpc.so 00:01:49.481 CC lib/notify/notify.o 00:01:49.481 CC lib/keyring/keyring.o 00:01:49.481 CC lib/notify/notify_rpc.o 00:01:49.481 CC lib/keyring/keyring_rpc.o 00:01:49.481 CC lib/trace/trace.o 00:01:49.481 CC lib/trace/trace_flags.o 00:01:49.481 CC lib/trace/trace_rpc.o 00:01:49.738 LIB libspdk_notify.a 00:01:49.738 SO libspdk_notify.so.6.0 00:01:49.738 LIB libspdk_keyring.a 00:01:49.738 SYMLINK libspdk_notify.so 00:01:49.738 LIB libspdk_trace.a 00:01:49.738 SO libspdk_keyring.so.1.0 00:01:49.738 SO libspdk_trace.so.10.0 00:01:49.738 SYMLINK libspdk_keyring.so 00:01:49.995 SYMLINK libspdk_trace.so 00:01:49.995 LIB libspdk_env_dpdk.a 00:01:49.995 CC lib/thread/thread.o 00:01:49.995 CC lib/thread/iobuf.o 00:01:49.995 CC lib/sock/sock.o 00:01:49.995 CC lib/sock/sock_rpc.o 00:01:49.995 SO libspdk_env_dpdk.so.14.1 00:01:50.251 SYMLINK libspdk_env_dpdk.so 00:01:50.509 LIB libspdk_sock.a 00:01:50.509 SO libspdk_sock.so.10.0 00:01:50.509 SYMLINK libspdk_sock.so 00:01:50.766 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:50.766 CC lib/nvme/nvme_ctrlr.o 00:01:50.766 CC lib/nvme/nvme_fabric.o 00:01:50.766 CC lib/nvme/nvme_ns_cmd.o 00:01:50.766 CC lib/nvme/nvme_ns.o 00:01:50.766 CC lib/nvme/nvme_pcie_common.o 00:01:50.766 CC lib/nvme/nvme_pcie.o 00:01:50.766 CC lib/nvme/nvme_qpair.o 00:01:50.766 CC lib/nvme/nvme.o 00:01:50.766 CC lib/nvme/nvme_quirks.o 00:01:50.766 CC lib/nvme/nvme_transport.o 00:01:50.766 CC lib/nvme/nvme_discovery.o 00:01:50.766 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:50.766 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:50.766 CC lib/nvme/nvme_tcp.o 00:01:50.766 CC lib/nvme/nvme_opal.o 00:01:50.766 CC lib/nvme/nvme_io_msg.o 00:01:50.766 CC lib/nvme/nvme_poll_group.o 00:01:50.766 CC lib/nvme/nvme_zns.o 00:01:50.766 CC lib/nvme/nvme_stubs.o 00:01:50.766 CC lib/nvme/nvme_auth.o 00:01:50.766 CC lib/nvme/nvme_cuse.o 00:01:50.766 CC lib/nvme/nvme_vfio_user.o 00:01:50.766 CC lib/nvme/nvme_rdma.o 00:01:51.698 LIB libspdk_thread.a 00:01:51.698 SO libspdk_thread.so.10.1 00:01:51.698 SYMLINK libspdk_thread.so 00:01:51.956 CC lib/accel/accel.o 00:01:51.956 CC lib/vfu_tgt/tgt_endpoint.o 00:01:51.956 CC lib/virtio/virtio.o 00:01:51.956 CC lib/accel/accel_rpc.o 00:01:51.956 CC lib/vfu_tgt/tgt_rpc.o 00:01:51.956 CC lib/virtio/virtio_vhost_user.o 00:01:51.956 CC lib/accel/accel_sw.o 00:01:51.956 CC lib/virtio/virtio_vfio_user.o 00:01:51.956 CC lib/blob/blobstore.o 00:01:51.956 CC lib/virtio/virtio_pci.o 00:01:51.956 CC lib/blob/request.o 00:01:51.956 CC lib/init/json_config.o 00:01:51.956 CC lib/blob/zeroes.o 00:01:51.956 CC lib/init/subsystem.o 00:01:51.956 CC lib/blob/blob_bs_dev.o 00:01:51.956 CC lib/init/subsystem_rpc.o 00:01:51.956 CC lib/init/rpc.o 00:01:52.213 LIB libspdk_init.a 00:01:52.213 SO libspdk_init.so.5.0 00:01:52.213 LIB libspdk_virtio.a 00:01:52.213 LIB libspdk_vfu_tgt.a 00:01:52.213 SYMLINK libspdk_init.so 00:01:52.213 SO libspdk_vfu_tgt.so.3.0 00:01:52.213 SO libspdk_virtio.so.7.0 00:01:52.213 SYMLINK libspdk_vfu_tgt.so 00:01:52.213 SYMLINK libspdk_virtio.so 00:01:52.471 CC lib/event/app.o 00:01:52.471 CC lib/event/reactor.o 00:01:52.471 CC lib/event/log_rpc.o 00:01:52.471 CC lib/event/app_rpc.o 
00:01:52.471 CC lib/event/scheduler_static.o 00:01:52.728 LIB libspdk_event.a 00:01:52.728 SO libspdk_event.so.14.0 00:01:52.985 LIB libspdk_accel.a 00:01:52.985 SYMLINK libspdk_event.so 00:01:52.985 SO libspdk_accel.so.15.1 00:01:52.985 SYMLINK libspdk_accel.so 00:01:52.985 LIB libspdk_nvme.a 00:01:53.242 CC lib/bdev/bdev.o 00:01:53.242 CC lib/bdev/bdev_rpc.o 00:01:53.242 CC lib/bdev/bdev_zone.o 00:01:53.242 CC lib/bdev/part.o 00:01:53.242 CC lib/bdev/scsi_nvme.o 00:01:53.242 SO libspdk_nvme.so.13.1 00:01:53.498 SYMLINK libspdk_nvme.so 00:01:54.870 LIB libspdk_blob.a 00:01:54.870 SO libspdk_blob.so.11.0 00:01:54.870 SYMLINK libspdk_blob.so 00:01:55.128 CC lib/lvol/lvol.o 00:01:55.128 CC lib/blobfs/blobfs.o 00:01:55.128 CC lib/blobfs/tree.o 00:01:55.693 LIB libspdk_bdev.a 00:01:55.693 SO libspdk_bdev.so.15.1 00:01:55.693 SYMLINK libspdk_bdev.so 00:01:55.961 LIB libspdk_blobfs.a 00:01:55.961 SO libspdk_blobfs.so.10.0 00:01:55.961 SYMLINK libspdk_blobfs.so 00:01:55.961 LIB libspdk_lvol.a 00:01:55.961 CC lib/nbd/nbd.o 00:01:55.961 CC lib/nbd/nbd_rpc.o 00:01:55.961 CC lib/scsi/dev.o 00:01:55.961 CC lib/scsi/lun.o 00:01:55.961 CC lib/ublk/ublk.o 00:01:55.961 CC lib/nvmf/ctrlr.o 00:01:55.961 CC lib/scsi/port.o 00:01:55.961 CC lib/ublk/ublk_rpc.o 00:01:55.961 CC lib/ftl/ftl_core.o 00:01:55.961 CC lib/nvmf/ctrlr_discovery.o 00:01:55.961 CC lib/scsi/scsi.o 00:01:55.961 CC lib/ftl/ftl_init.o 00:01:55.961 CC lib/nvmf/ctrlr_bdev.o 00:01:55.961 CC lib/scsi/scsi_bdev.o 00:01:55.961 CC lib/ftl/ftl_layout.o 00:01:55.961 CC lib/scsi/scsi_pr.o 00:01:55.961 CC lib/nvmf/subsystem.o 00:01:55.961 CC lib/nvmf/nvmf.o 00:01:55.961 CC lib/scsi/scsi_rpc.o 00:01:55.961 CC lib/ftl/ftl_debug.o 00:01:55.961 CC lib/ftl/ftl_io.o 00:01:55.961 CC lib/nvmf/nvmf_rpc.o 00:01:55.961 CC lib/nvmf/transport.o 00:01:55.961 CC lib/scsi/task.o 00:01:55.961 CC lib/ftl/ftl_l2p.o 00:01:55.961 CC lib/ftl/ftl_sb.o 00:01:55.961 CC lib/nvmf/tcp.o 00:01:55.961 CC lib/ftl/ftl_l2p_flat.o 00:01:55.961 CC lib/nvmf/stubs.o 00:01:55.962 CC lib/ftl/ftl_nv_cache.o 00:01:55.962 CC lib/ftl/ftl_band.o 00:01:55.962 CC lib/nvmf/mdns_server.o 00:01:55.962 CC lib/nvmf/vfio_user.o 00:01:55.962 CC lib/nvmf/auth.o 00:01:55.962 CC lib/nvmf/rdma.o 00:01:55.962 CC lib/ftl/ftl_band_ops.o 00:01:55.962 CC lib/ftl/ftl_writer.o 00:01:55.962 CC lib/ftl/ftl_rq.o 00:01:55.962 CC lib/ftl/ftl_reloc.o 00:01:55.962 CC lib/ftl/ftl_l2p_cache.o 00:01:55.962 CC lib/ftl/ftl_p2l.o 00:01:55.962 CC lib/ftl/mngt/ftl_mngt.o 00:01:55.962 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:55.962 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:55.962 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:55.962 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:55.962 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:55.962 SO libspdk_lvol.so.10.0 00:01:56.220 SYMLINK libspdk_lvol.so 00:01:56.220 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.220 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.220 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.484 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.484 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.484 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.484 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.484 CC lib/ftl/utils/ftl_conf.o 00:01:56.484 CC lib/ftl/utils/ftl_md.o 00:01:56.484 CC lib/ftl/utils/ftl_mempool.o 00:01:56.484 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.484 CC lib/ftl/utils/ftl_property.o 00:01:56.484 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.484 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.484 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.484 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.484 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.484 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.484 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:56.484 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.484 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.742 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.742 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.742 CC lib/ftl/base/ftl_base_dev.o 00:01:56.742 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.742 CC lib/ftl/ftl_trace.o 00:01:56.742 LIB libspdk_nbd.a 00:01:56.742 SO libspdk_nbd.so.7.0 00:01:57.000 SYMLINK libspdk_nbd.so 00:01:57.000 LIB libspdk_scsi.a 00:01:57.000 SO libspdk_scsi.so.9.0 00:01:57.000 LIB libspdk_ublk.a 00:01:57.000 SO libspdk_ublk.so.3.0 00:01:57.000 SYMLINK libspdk_scsi.so 00:01:57.000 SYMLINK libspdk_ublk.so 00:01:57.258 CC lib/vhost/vhost.o 00:01:57.258 CC lib/iscsi/conn.o 00:01:57.258 CC lib/vhost/vhost_rpc.o 00:01:57.258 CC lib/iscsi/init_grp.o 00:01:57.258 CC lib/iscsi/iscsi.o 00:01:57.258 CC lib/vhost/vhost_scsi.o 00:01:57.258 CC lib/vhost/vhost_blk.o 00:01:57.258 CC lib/iscsi/md5.o 00:01:57.258 CC lib/vhost/rte_vhost_user.o 00:01:57.258 CC lib/iscsi/param.o 00:01:57.258 CC lib/iscsi/portal_grp.o 00:01:57.258 CC lib/iscsi/tgt_node.o 00:01:57.258 CC lib/iscsi/iscsi_subsystem.o 00:01:57.258 CC lib/iscsi/iscsi_rpc.o 00:01:57.258 CC lib/iscsi/task.o 00:01:57.516 LIB libspdk_ftl.a 00:01:57.516 SO libspdk_ftl.so.9.0 00:01:58.080 SYMLINK libspdk_ftl.so 00:01:58.339 LIB libspdk_vhost.a 00:01:58.597 SO libspdk_vhost.so.8.0 00:01:58.597 LIB libspdk_nvmf.a 00:01:58.597 SO libspdk_nvmf.so.18.1 00:01:58.597 SYMLINK libspdk_vhost.so 00:01:58.597 LIB libspdk_iscsi.a 00:01:58.597 SO libspdk_iscsi.so.8.0 00:01:58.855 SYMLINK libspdk_nvmf.so 00:01:58.855 SYMLINK libspdk_iscsi.so 00:01:59.114 CC module/vfu_device/vfu_virtio.o 00:01:59.114 CC module/vfu_device/vfu_virtio_blk.o 00:01:59.114 CC module/vfu_device/vfu_virtio_scsi.o 00:01:59.114 CC module/vfu_device/vfu_virtio_rpc.o 00:01:59.114 CC module/env_dpdk/env_dpdk_rpc.o 00:01:59.114 CC module/blob/bdev/blob_bdev.o 00:01:59.114 CC module/accel/iaa/accel_iaa.o 00:01:59.114 CC module/accel/error/accel_error.o 00:01:59.114 CC module/accel/dsa/accel_dsa.o 00:01:59.114 CC module/accel/iaa/accel_iaa_rpc.o 00:01:59.114 CC module/accel/ioat/accel_ioat.o 00:01:59.114 CC module/keyring/linux/keyring.o 00:01:59.114 CC module/scheduler/gscheduler/gscheduler.o 00:01:59.114 CC module/accel/error/accel_error_rpc.o 00:01:59.114 CC module/sock/posix/posix.o 00:01:59.114 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:59.114 CC module/keyring/linux/keyring_rpc.o 00:01:59.114 CC module/accel/dsa/accel_dsa_rpc.o 00:01:59.114 CC module/accel/ioat/accel_ioat_rpc.o 00:01:59.114 CC module/keyring/file/keyring.o 00:01:59.114 CC module/keyring/file/keyring_rpc.o 00:01:59.114 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:59.372 LIB libspdk_env_dpdk_rpc.a 00:01:59.372 SO libspdk_env_dpdk_rpc.so.6.0 00:01:59.372 SYMLINK libspdk_env_dpdk_rpc.so 00:01:59.372 LIB libspdk_keyring_linux.a 00:01:59.372 LIB libspdk_keyring_file.a 00:01:59.372 LIB libspdk_scheduler_gscheduler.a 00:01:59.372 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.372 SO libspdk_keyring_linux.so.1.0 00:01:59.372 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.372 SO libspdk_keyring_file.so.1.0 00:01:59.372 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.372 LIB libspdk_accel_error.a 00:01:59.372 LIB libspdk_accel_ioat.a 00:01:59.372 LIB libspdk_scheduler_dynamic.a 00:01:59.372 LIB libspdk_accel_iaa.a 00:01:59.372 SO libspdk_accel_error.so.2.0 00:01:59.372 SO 
libspdk_accel_ioat.so.6.0 00:01:59.372 SYMLINK libspdk_scheduler_gscheduler.so 00:01:59.372 SYMLINK libspdk_keyring_linux.so 00:01:59.372 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.372 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.372 SYMLINK libspdk_keyring_file.so 00:01:59.372 SO libspdk_accel_iaa.so.3.0 00:01:59.630 LIB libspdk_accel_dsa.a 00:01:59.630 SYMLINK libspdk_accel_error.so 00:01:59.630 LIB libspdk_blob_bdev.a 00:01:59.630 SYMLINK libspdk_accel_ioat.so 00:01:59.630 SYMLINK libspdk_scheduler_dynamic.so 00:01:59.630 SO libspdk_accel_dsa.so.5.0 00:01:59.630 SYMLINK libspdk_accel_iaa.so 00:01:59.630 SO libspdk_blob_bdev.so.11.0 00:01:59.630 SYMLINK libspdk_accel_dsa.so 00:01:59.630 SYMLINK libspdk_blob_bdev.so 00:01:59.889 LIB libspdk_vfu_device.a 00:01:59.889 SO libspdk_vfu_device.so.3.0 00:01:59.889 CC module/bdev/error/vbdev_error.o 00:01:59.889 CC module/bdev/lvol/vbdev_lvol.o 00:01:59.889 CC module/bdev/error/vbdev_error_rpc.o 00:01:59.889 CC module/bdev/gpt/gpt.o 00:01:59.889 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:59.889 CC module/bdev/gpt/vbdev_gpt.o 00:01:59.889 CC module/bdev/delay/vbdev_delay.o 00:01:59.889 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:59.889 CC module/bdev/malloc/bdev_malloc.o 00:01:59.889 CC module/blobfs/bdev/blobfs_bdev.o 00:01:59.889 CC module/bdev/split/vbdev_split.o 00:01:59.889 CC module/bdev/null/bdev_null.o 00:01:59.889 CC module/bdev/passthru/vbdev_passthru.o 00:01:59.889 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:59.889 CC module/bdev/split/vbdev_split_rpc.o 00:01:59.889 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:59.889 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:59.889 CC module/bdev/aio/bdev_aio.o 00:01:59.889 CC module/bdev/null/bdev_null_rpc.o 00:01:59.889 CC module/bdev/raid/bdev_raid.o 00:01:59.889 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:59.889 CC module/bdev/raid/bdev_raid_rpc.o 00:01:59.889 CC module/bdev/aio/bdev_aio_rpc.o 00:01:59.889 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:59.889 CC module/bdev/raid/bdev_raid_sb.o 00:01:59.889 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:59.889 CC module/bdev/raid/raid0.o 00:01:59.889 CC module/bdev/ftl/bdev_ftl.o 00:01:59.889 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:59.889 CC module/bdev/iscsi/bdev_iscsi.o 00:01:59.889 CC module/bdev/raid/raid1.o 00:01:59.889 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:59.889 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:59.889 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:59.889 CC module/bdev/raid/concat.o 00:01:59.889 CC module/bdev/nvme/bdev_nvme.o 00:01:59.889 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:59.889 CC module/bdev/nvme/nvme_rpc.o 00:01:59.889 CC module/bdev/nvme/bdev_mdns_client.o 00:01:59.889 CC module/bdev/nvme/vbdev_opal.o 00:01:59.889 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:59.889 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:59.889 SYMLINK libspdk_vfu_device.so 00:02:00.148 LIB libspdk_sock_posix.a 00:02:00.148 SO libspdk_sock_posix.so.6.0 00:02:00.148 LIB libspdk_bdev_error.a 00:02:00.148 SO libspdk_bdev_error.so.6.0 00:02:00.148 LIB libspdk_blobfs_bdev.a 00:02:00.148 LIB libspdk_bdev_ftl.a 00:02:00.406 SO libspdk_blobfs_bdev.so.6.0 00:02:00.406 SYMLINK libspdk_bdev_error.so 00:02:00.406 SYMLINK libspdk_sock_posix.so 00:02:00.406 LIB libspdk_bdev_delay.a 00:02:00.406 SO libspdk_bdev_ftl.so.6.0 00:02:00.406 LIB libspdk_bdev_split.a 00:02:00.406 LIB libspdk_bdev_gpt.a 00:02:00.406 SO libspdk_bdev_delay.so.6.0 00:02:00.406 SO libspdk_bdev_split.so.6.0 00:02:00.406 SYMLINK libspdk_blobfs_bdev.so 
00:02:00.406 SO libspdk_bdev_gpt.so.6.0 00:02:00.406 SYMLINK libspdk_bdev_ftl.so 00:02:00.406 LIB libspdk_bdev_null.a 00:02:00.406 SYMLINK libspdk_bdev_split.so 00:02:00.406 LIB libspdk_bdev_aio.a 00:02:00.406 SYMLINK libspdk_bdev_delay.so 00:02:00.406 SO libspdk_bdev_null.so.6.0 00:02:00.406 SYMLINK libspdk_bdev_gpt.so 00:02:00.406 LIB libspdk_bdev_passthru.a 00:02:00.406 SO libspdk_bdev_aio.so.6.0 00:02:00.406 LIB libspdk_bdev_zone_block.a 00:02:00.406 LIB libspdk_bdev_iscsi.a 00:02:00.406 SO libspdk_bdev_passthru.so.6.0 00:02:00.406 SO libspdk_bdev_zone_block.so.6.0 00:02:00.406 SYMLINK libspdk_bdev_null.so 00:02:00.406 SO libspdk_bdev_iscsi.so.6.0 00:02:00.406 SYMLINK libspdk_bdev_aio.so 00:02:00.406 SYMLINK libspdk_bdev_passthru.so 00:02:00.406 SYMLINK libspdk_bdev_zone_block.so 00:02:00.406 LIB libspdk_bdev_malloc.a 00:02:00.406 SYMLINK libspdk_bdev_iscsi.so 00:02:00.664 SO libspdk_bdev_malloc.so.6.0 00:02:00.664 LIB libspdk_bdev_lvol.a 00:02:00.664 SYMLINK libspdk_bdev_malloc.so 00:02:00.664 SO libspdk_bdev_lvol.so.6.0 00:02:00.664 LIB libspdk_bdev_virtio.a 00:02:00.664 SO libspdk_bdev_virtio.so.6.0 00:02:00.664 SYMLINK libspdk_bdev_lvol.so 00:02:00.664 SYMLINK libspdk_bdev_virtio.so 00:02:00.921 LIB libspdk_bdev_raid.a 00:02:00.921 SO libspdk_bdev_raid.so.6.0 00:02:01.181 SYMLINK libspdk_bdev_raid.so 00:02:02.146 LIB libspdk_bdev_nvme.a 00:02:02.146 SO libspdk_bdev_nvme.so.7.0 00:02:02.403 SYMLINK libspdk_bdev_nvme.so 00:02:02.661 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.661 CC module/event/subsystems/sock/sock.o 00:02:02.661 CC module/event/subsystems/keyring/keyring.o 00:02:02.661 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:02.661 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.661 CC module/event/subsystems/vmd/vmd.o 00:02:02.661 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.661 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:02.661 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.919 LIB libspdk_event_keyring.a 00:02:02.919 LIB libspdk_event_vhost_blk.a 00:02:02.919 LIB libspdk_event_vfu_tgt.a 00:02:02.919 LIB libspdk_event_scheduler.a 00:02:02.919 LIB libspdk_event_vmd.a 00:02:02.919 LIB libspdk_event_sock.a 00:02:02.919 SO libspdk_event_keyring.so.1.0 00:02:02.919 LIB libspdk_event_iobuf.a 00:02:02.919 SO libspdk_event_vhost_blk.so.3.0 00:02:02.919 SO libspdk_event_vfu_tgt.so.3.0 00:02:02.919 SO libspdk_event_scheduler.so.4.0 00:02:02.919 SO libspdk_event_sock.so.5.0 00:02:02.919 SO libspdk_event_vmd.so.6.0 00:02:02.919 SO libspdk_event_iobuf.so.3.0 00:02:02.919 SYMLINK libspdk_event_keyring.so 00:02:02.919 SYMLINK libspdk_event_vhost_blk.so 00:02:02.919 SYMLINK libspdk_event_vfu_tgt.so 00:02:02.919 SYMLINK libspdk_event_scheduler.so 00:02:02.919 SYMLINK libspdk_event_sock.so 00:02:02.919 SYMLINK libspdk_event_vmd.so 00:02:02.919 SYMLINK libspdk_event_iobuf.so 00:02:03.176 CC module/event/subsystems/accel/accel.o 00:02:03.176 LIB libspdk_event_accel.a 00:02:03.433 SO libspdk_event_accel.so.6.0 00:02:03.433 SYMLINK libspdk_event_accel.so 00:02:03.433 CC module/event/subsystems/bdev/bdev.o 00:02:03.690 LIB libspdk_event_bdev.a 00:02:03.690 SO libspdk_event_bdev.so.6.0 00:02:03.690 SYMLINK libspdk_event_bdev.so 00:02:03.947 CC module/event/subsystems/ublk/ublk.o 00:02:03.947 CC module/event/subsystems/nbd/nbd.o 00:02:03.947 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.947 CC module/event/subsystems/scsi/scsi.o 00:02:03.947 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:04.206 LIB libspdk_event_nbd.a 00:02:04.206 LIB 
libspdk_event_ublk.a 00:02:04.206 LIB libspdk_event_scsi.a 00:02:04.206 SO libspdk_event_nbd.so.6.0 00:02:04.206 SO libspdk_event_ublk.so.3.0 00:02:04.206 SO libspdk_event_scsi.so.6.0 00:02:04.206 SYMLINK libspdk_event_nbd.so 00:02:04.206 SYMLINK libspdk_event_ublk.so 00:02:04.206 SYMLINK libspdk_event_scsi.so 00:02:04.206 LIB libspdk_event_nvmf.a 00:02:04.206 SO libspdk_event_nvmf.so.6.0 00:02:04.206 SYMLINK libspdk_event_nvmf.so 00:02:04.464 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:04.464 CC module/event/subsystems/iscsi/iscsi.o 00:02:04.464 LIB libspdk_event_vhost_scsi.a 00:02:04.464 LIB libspdk_event_iscsi.a 00:02:04.464 SO libspdk_event_vhost_scsi.so.3.0 00:02:04.464 SO libspdk_event_iscsi.so.6.0 00:02:04.464 SYMLINK libspdk_event_vhost_scsi.so 00:02:04.720 SYMLINK libspdk_event_iscsi.so 00:02:04.720 SO libspdk.so.6.0 00:02:04.720 SYMLINK libspdk.so 00:02:04.980 CC app/trace_record/trace_record.o 00:02:04.980 CXX app/trace/trace.o 00:02:04.980 CC app/spdk_nvme_perf/perf.o 00:02:04.980 CC app/spdk_nvme_identify/identify.o 00:02:04.980 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.980 CC test/rpc_client/rpc_client_test.o 00:02:04.980 CC app/spdk_top/spdk_top.o 00:02:04.980 TEST_HEADER include/spdk/accel.h 00:02:04.980 TEST_HEADER include/spdk/accel_module.h 00:02:04.980 TEST_HEADER include/spdk/assert.h 00:02:04.980 TEST_HEADER include/spdk/barrier.h 00:02:04.980 TEST_HEADER include/spdk/base64.h 00:02:04.980 TEST_HEADER include/spdk/bdev.h 00:02:04.980 TEST_HEADER include/spdk/bdev_module.h 00:02:04.980 TEST_HEADER include/spdk/bdev_zone.h 00:02:04.980 CC app/spdk_lspci/spdk_lspci.o 00:02:04.980 TEST_HEADER include/spdk/bit_array.h 00:02:04.980 TEST_HEADER include/spdk/bit_pool.h 00:02:04.980 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.980 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.980 TEST_HEADER include/spdk/blobfs.h 00:02:04.980 TEST_HEADER include/spdk/blob.h 00:02:04.980 TEST_HEADER include/spdk/conf.h 00:02:04.980 TEST_HEADER include/spdk/config.h 00:02:04.980 TEST_HEADER include/spdk/cpuset.h 00:02:04.980 TEST_HEADER include/spdk/crc16.h 00:02:04.980 TEST_HEADER include/spdk/crc32.h 00:02:04.980 TEST_HEADER include/spdk/crc64.h 00:02:04.980 TEST_HEADER include/spdk/dif.h 00:02:04.980 TEST_HEADER include/spdk/dma.h 00:02:04.980 TEST_HEADER include/spdk/endian.h 00:02:04.980 TEST_HEADER include/spdk/env.h 00:02:04.980 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.980 TEST_HEADER include/spdk/event.h 00:02:04.980 TEST_HEADER include/spdk/fd_group.h 00:02:04.980 TEST_HEADER include/spdk/fd.h 00:02:04.980 TEST_HEADER include/spdk/file.h 00:02:04.980 TEST_HEADER include/spdk/ftl.h 00:02:04.980 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.980 TEST_HEADER include/spdk/hexlify.h 00:02:04.980 TEST_HEADER include/spdk/histogram_data.h 00:02:04.980 TEST_HEADER include/spdk/idxd.h 00:02:04.980 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.980 TEST_HEADER include/spdk/init.h 00:02:04.980 TEST_HEADER include/spdk/ioat.h 00:02:04.980 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.980 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.980 TEST_HEADER include/spdk/json.h 00:02:04.980 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.980 TEST_HEADER include/spdk/keyring.h 00:02:04.980 TEST_HEADER include/spdk/keyring_module.h 00:02:04.980 TEST_HEADER include/spdk/likely.h 00:02:04.980 TEST_HEADER include/spdk/log.h 00:02:04.980 TEST_HEADER include/spdk/lvol.h 00:02:04.980 TEST_HEADER include/spdk/memory.h 00:02:04.980 TEST_HEADER include/spdk/mmio.h 00:02:04.980 TEST_HEADER 
include/spdk/notify.h 00:02:04.980 TEST_HEADER include/spdk/nbd.h 00:02:04.980 TEST_HEADER include/spdk/nvme.h 00:02:04.980 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.980 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.980 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.980 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.980 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.980 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.980 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.980 TEST_HEADER include/spdk/nvmf.h 00:02:04.980 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.980 TEST_HEADER include/spdk/opal.h 00:02:04.980 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.980 TEST_HEADER include/spdk/opal_spec.h 00:02:04.980 TEST_HEADER include/spdk/pci_ids.h 00:02:04.980 TEST_HEADER include/spdk/pipe.h 00:02:04.980 TEST_HEADER include/spdk/queue.h 00:02:04.980 TEST_HEADER include/spdk/reduce.h 00:02:04.980 TEST_HEADER include/spdk/rpc.h 00:02:04.980 TEST_HEADER include/spdk/scheduler.h 00:02:04.980 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.980 TEST_HEADER include/spdk/scsi.h 00:02:04.980 TEST_HEADER include/spdk/sock.h 00:02:04.980 TEST_HEADER include/spdk/stdinc.h 00:02:04.980 TEST_HEADER include/spdk/string.h 00:02:04.980 TEST_HEADER include/spdk/thread.h 00:02:04.980 TEST_HEADER include/spdk/trace_parser.h 00:02:04.980 TEST_HEADER include/spdk/trace.h 00:02:04.980 TEST_HEADER include/spdk/ublk.h 00:02:04.980 TEST_HEADER include/spdk/tree.h 00:02:04.980 TEST_HEADER include/spdk/util.h 00:02:04.980 TEST_HEADER include/spdk/uuid.h 00:02:04.980 TEST_HEADER include/spdk/version.h 00:02:04.980 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.980 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.980 TEST_HEADER include/spdk/vhost.h 00:02:04.980 TEST_HEADER include/spdk/vmd.h 00:02:04.980 TEST_HEADER include/spdk/xor.h 00:02:04.980 TEST_HEADER include/spdk/zipf.h 00:02:04.980 CC app/spdk_dd/spdk_dd.o 00:02:04.980 CXX test/cpp_headers/accel.o 00:02:04.980 CXX test/cpp_headers/accel_module.o 00:02:04.980 CXX test/cpp_headers/assert.o 00:02:04.980 CXX test/cpp_headers/barrier.o 00:02:04.980 CXX test/cpp_headers/base64.o 00:02:04.980 CXX test/cpp_headers/bdev.o 00:02:04.980 CXX test/cpp_headers/bdev_module.o 00:02:04.980 CXX test/cpp_headers/bdev_zone.o 00:02:04.980 CXX test/cpp_headers/bit_array.o 00:02:04.980 CXX test/cpp_headers/bit_pool.o 00:02:04.981 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:04.981 CXX test/cpp_headers/blob_bdev.o 00:02:04.981 CXX test/cpp_headers/blobfs_bdev.o 00:02:04.981 CXX test/cpp_headers/blobfs.o 00:02:04.981 CXX test/cpp_headers/blob.o 00:02:04.981 CXX test/cpp_headers/conf.o 00:02:04.981 CXX test/cpp_headers/config.o 00:02:04.981 CXX test/cpp_headers/cpuset.o 00:02:04.981 CXX test/cpp_headers/crc16.o 00:02:04.981 CC app/nvmf_tgt/nvmf_main.o 00:02:04.981 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.981 CXX test/cpp_headers/crc32.o 00:02:04.981 CC examples/util/zipf/zipf.o 00:02:04.981 CC test/app/jsoncat/jsoncat.o 00:02:04.981 CC test/app/histogram_perf/histogram_perf.o 00:02:04.981 CC examples/ioat/verify/verify.o 00:02:04.981 CC test/thread/poller_perf/poller_perf.o 00:02:04.981 CC test/app/stub/stub.o 00:02:04.981 CC examples/ioat/perf/perf.o 00:02:04.981 CC app/fio/nvme/fio_plugin.o 00:02:04.981 CC test/env/pci/pci_ut.o 00:02:04.981 CC test/env/vtophys/vtophys.o 00:02:04.981 CC app/spdk_tgt/spdk_tgt.o 00:02:04.981 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:04.981 CC test/env/memory/memory_ut.o 00:02:05.245 CC test/dma/test_dma/test_dma.o 00:02:05.245 
CC app/fio/bdev/fio_plugin.o 00:02:05.245 CC test/app/bdev_svc/bdev_svc.o 00:02:05.245 CC test/env/mem_callbacks/mem_callbacks.o 00:02:05.245 LINK spdk_lspci 00:02:05.245 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.245 LINK rpc_client_test 00:02:05.245 LINK spdk_nvme_discover 00:02:05.245 LINK jsoncat 00:02:05.245 LINK histogram_perf 00:02:05.245 LINK zipf 00:02:05.513 CXX test/cpp_headers/crc64.o 00:02:05.513 LINK poller_perf 00:02:05.513 CXX test/cpp_headers/dif.o 00:02:05.513 CXX test/cpp_headers/dma.o 00:02:05.513 LINK vtophys 00:02:05.513 CXX test/cpp_headers/endian.o 00:02:05.513 LINK env_dpdk_post_init 00:02:05.513 LINK interrupt_tgt 00:02:05.513 LINK spdk_trace_record 00:02:05.513 CXX test/cpp_headers/env_dpdk.o 00:02:05.513 CXX test/cpp_headers/env.o 00:02:05.513 CXX test/cpp_headers/event.o 00:02:05.513 LINK nvmf_tgt 00:02:05.513 CXX test/cpp_headers/fd_group.o 00:02:05.513 CXX test/cpp_headers/fd.o 00:02:05.513 CXX test/cpp_headers/file.o 00:02:05.513 CXX test/cpp_headers/ftl.o 00:02:05.513 LINK stub 00:02:05.513 LINK iscsi_tgt 00:02:05.513 LINK verify 00:02:05.513 CXX test/cpp_headers/gpt_spec.o 00:02:05.513 CXX test/cpp_headers/hexlify.o 00:02:05.513 CXX test/cpp_headers/histogram_data.o 00:02:05.513 CXX test/cpp_headers/idxd.o 00:02:05.513 CXX test/cpp_headers/idxd_spec.o 00:02:05.513 LINK ioat_perf 00:02:05.513 LINK spdk_tgt 00:02:05.513 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.513 LINK bdev_svc 00:02:05.513 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:05.513 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:05.774 CXX test/cpp_headers/init.o 00:02:05.774 CXX test/cpp_headers/ioat.o 00:02:05.774 CXX test/cpp_headers/ioat_spec.o 00:02:05.774 LINK spdk_dd 00:02:05.774 CXX test/cpp_headers/iscsi_spec.o 00:02:05.774 CXX test/cpp_headers/json.o 00:02:05.774 CXX test/cpp_headers/jsonrpc.o 00:02:05.774 CXX test/cpp_headers/keyring.o 00:02:05.774 CXX test/cpp_headers/keyring_module.o 00:02:05.774 CXX test/cpp_headers/likely.o 00:02:05.774 CXX test/cpp_headers/log.o 00:02:05.774 CXX test/cpp_headers/lvol.o 00:02:05.774 CXX test/cpp_headers/memory.o 00:02:05.774 CXX test/cpp_headers/mmio.o 00:02:05.774 CXX test/cpp_headers/nbd.o 00:02:05.774 CXX test/cpp_headers/notify.o 00:02:05.774 CXX test/cpp_headers/nvme.o 00:02:05.774 CXX test/cpp_headers/nvme_intel.o 00:02:05.774 LINK pci_ut 00:02:05.774 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:05.774 CXX test/cpp_headers/nvme_spec.o 00:02:05.774 CXX test/cpp_headers/nvme_ocssd.o 00:02:05.774 LINK spdk_trace 00:02:05.774 CXX test/cpp_headers/nvme_zns.o 00:02:05.774 CXX test/cpp_headers/nvmf_cmd.o 00:02:05.774 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:05.774 CXX test/cpp_headers/nvmf.o 00:02:05.774 CXX test/cpp_headers/nvmf_spec.o 00:02:05.774 CXX test/cpp_headers/nvmf_transport.o 00:02:05.774 LINK test_dma 00:02:06.034 CXX test/cpp_headers/opal.o 00:02:06.034 CXX test/cpp_headers/opal_spec.o 00:02:06.034 CXX test/cpp_headers/pci_ids.o 00:02:06.034 CXX test/cpp_headers/pipe.o 00:02:06.034 LINK nvme_fuzz 00:02:06.034 CC test/event/event_perf/event_perf.o 00:02:06.034 CXX test/cpp_headers/queue.o 00:02:06.034 CXX test/cpp_headers/reduce.o 00:02:06.034 CC examples/idxd/perf/perf.o 00:02:06.034 CC examples/sock/hello_world/hello_sock.o 00:02:06.034 LINK spdk_nvme 00:02:06.034 CC examples/vmd/lsvmd/lsvmd.o 00:02:06.034 LINK spdk_bdev 00:02:06.295 CXX test/cpp_headers/rpc.o 00:02:06.295 CC test/event/reactor/reactor.o 00:02:06.295 CXX test/cpp_headers/scheduler.o 00:02:06.295 CXX test/cpp_headers/scsi.o 00:02:06.295 CC 
test/event/reactor_perf/reactor_perf.o 00:02:06.295 CXX test/cpp_headers/scsi_spec.o 00:02:06.295 CC examples/thread/thread/thread_ex.o 00:02:06.295 CXX test/cpp_headers/sock.o 00:02:06.295 CXX test/cpp_headers/stdinc.o 00:02:06.295 CXX test/cpp_headers/string.o 00:02:06.295 CXX test/cpp_headers/thread.o 00:02:06.295 CC test/event/app_repeat/app_repeat.o 00:02:06.295 CC examples/vmd/led/led.o 00:02:06.295 CXX test/cpp_headers/trace.o 00:02:06.295 CXX test/cpp_headers/trace_parser.o 00:02:06.295 CXX test/cpp_headers/tree.o 00:02:06.295 CXX test/cpp_headers/ublk.o 00:02:06.295 CXX test/cpp_headers/util.o 00:02:06.295 CXX test/cpp_headers/uuid.o 00:02:06.295 CXX test/cpp_headers/version.o 00:02:06.295 CXX test/cpp_headers/vfio_user_pci.o 00:02:06.295 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.295 CC test/event/scheduler/scheduler.o 00:02:06.295 CXX test/cpp_headers/vhost.o 00:02:06.295 CXX test/cpp_headers/vmd.o 00:02:06.295 CXX test/cpp_headers/xor.o 00:02:06.295 LINK vhost_fuzz 00:02:06.295 CXX test/cpp_headers/zipf.o 00:02:06.295 LINK spdk_nvme_perf 00:02:06.557 LINK lsvmd 00:02:06.557 LINK event_perf 00:02:06.557 LINK mem_callbacks 00:02:06.557 LINK reactor 00:02:06.557 LINK reactor_perf 00:02:06.557 CC app/vhost/vhost.o 00:02:06.557 LINK led 00:02:06.557 LINK spdk_top 00:02:06.557 LINK spdk_nvme_identify 00:02:06.557 LINK app_repeat 00:02:06.557 LINK hello_sock 00:02:06.557 CC test/nvme/startup/startup.o 00:02:06.557 CC test/nvme/overhead/overhead.o 00:02:06.557 CC test/nvme/reset/reset.o 00:02:06.557 CC test/nvme/e2edp/nvme_dp.o 00:02:06.557 CC test/nvme/err_injection/err_injection.o 00:02:06.557 CC test/nvme/sgl/sgl.o 00:02:06.557 CC test/nvme/reserve/reserve.o 00:02:06.557 CC test/nvme/aer/aer.o 00:02:06.557 CC test/nvme/simple_copy/simple_copy.o 00:02:06.816 CC test/blobfs/mkfs/mkfs.o 00:02:06.816 CC test/accel/dif/dif.o 00:02:06.816 CC test/nvme/connect_stress/connect_stress.o 00:02:06.816 CC test/nvme/boot_partition/boot_partition.o 00:02:06.816 LINK thread 00:02:06.816 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.816 CC test/nvme/compliance/nvme_compliance.o 00:02:06.816 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.816 CC test/nvme/cuse/cuse.o 00:02:06.816 CC test/nvme/fdp/fdp.o 00:02:06.816 CC test/lvol/esnap/esnap.o 00:02:06.816 LINK scheduler 00:02:06.816 LINK idxd_perf 00:02:06.816 LINK vhost 00:02:06.816 LINK boot_partition 00:02:06.816 LINK err_injection 00:02:07.075 LINK startup 00:02:07.075 LINK simple_copy 00:02:07.075 LINK doorbell_aers 00:02:07.075 LINK fused_ordering 00:02:07.075 LINK connect_stress 00:02:07.075 LINK reset 00:02:07.075 LINK sgl 00:02:07.075 LINK nvme_dp 00:02:07.075 LINK reserve 00:02:07.075 LINK mkfs 00:02:07.075 LINK aer 00:02:07.075 LINK memory_ut 00:02:07.075 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:07.075 CC examples/nvme/hotplug/hotplug.o 00:02:07.075 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.075 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:07.075 CC examples/nvme/arbitration/arbitration.o 00:02:07.075 CC examples/nvme/hello_world/hello_world.o 00:02:07.075 CC examples/nvme/abort/abort.o 00:02:07.075 CC examples/nvme/reconnect/reconnect.o 00:02:07.075 LINK overhead 00:02:07.075 LINK nvme_compliance 00:02:07.334 CC examples/accel/perf/accel_perf.o 00:02:07.334 CC examples/blob/hello_world/hello_blob.o 00:02:07.334 CC examples/blob/cli/blobcli.o 00:02:07.334 LINK cmb_copy 00:02:07.334 LINK dif 00:02:07.334 LINK hello_world 00:02:07.334 LINK pmr_persistence 00:02:07.334 LINK fdp 00:02:07.599 LINK hotplug 
00:02:07.599 LINK hello_blob 00:02:07.599 LINK reconnect 00:02:07.599 LINK abort 00:02:07.599 LINK arbitration 00:02:07.599 LINK nvme_manage 00:02:07.856 LINK accel_perf 00:02:07.856 CC test/bdev/bdevio/bdevio.o 00:02:07.856 LINK blobcli 00:02:07.856 LINK iscsi_fuzz 00:02:08.122 CC examples/bdev/hello_world/hello_bdev.o 00:02:08.122 CC examples/bdev/bdevperf/bdevperf.o 00:02:08.122 LINK bdevio 00:02:08.437 LINK hello_bdev 00:02:08.437 LINK cuse 00:02:09.026 LINK bdevperf 00:02:09.284 CC examples/nvmf/nvmf/nvmf.o 00:02:09.542 LINK nvmf 00:02:12.069 LINK esnap 00:02:12.069 00:02:12.069 real 0m48.734s 00:02:12.069 user 10m7.768s 00:02:12.069 sys 2m27.922s 00:02:12.069 12:41:30 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:12.069 12:41:30 make -- common/autotest_common.sh@10 -- $ set +x 00:02:12.069 ************************************ 00:02:12.069 END TEST make 00:02:12.069 ************************************ 00:02:12.069 12:41:30 -- common/autotest_common.sh@1142 -- $ return 0 00:02:12.069 12:41:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:12.069 12:41:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:12.069 12:41:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:12.069 12:41:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.069 12:41:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:12.069 12:41:30 -- pm/common@44 -- $ pid=3181805 00:02:12.069 12:41:30 -- pm/common@50 -- $ kill -TERM 3181805 00:02:12.069 12:41:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.069 12:41:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:12.069 12:41:30 -- pm/common@44 -- $ pid=3181806 00:02:12.069 12:41:30 -- pm/common@50 -- $ kill -TERM 3181806 00:02:12.069 12:41:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.069 12:41:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:12.069 12:41:30 -- pm/common@44 -- $ pid=3181809 00:02:12.069 12:41:30 -- pm/common@50 -- $ kill -TERM 3181809 00:02:12.069 12:41:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.069 12:41:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:12.069 12:41:30 -- pm/common@44 -- $ pid=3181837 00:02:12.069 12:41:30 -- pm/common@50 -- $ sudo -E kill -TERM 3181837 00:02:12.328 12:41:30 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.328 12:41:30 -- nvmf/common.sh@7 -- # uname -s 00:02:12.328 12:41:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.328 12:41:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.328 12:41:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.328 12:41:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.328 12:41:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.328 12:41:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.328 12:41:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.328 12:41:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.328 12:41:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.328 12:41:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.328 12:41:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:02:12.328 12:41:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:02:12.328 12:41:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.328 12:41:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.328 12:41:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.328 12:41:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:12.328 12:41:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:12.328 12:41:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.328 12:41:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.328 12:41:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.328 12:41:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.328 12:41:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.328 12:41:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.328 12:41:30 -- paths/export.sh@5 -- # export PATH 00:02:12.328 12:41:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.328 12:41:30 -- nvmf/common.sh@47 -- # : 0 00:02:12.328 12:41:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:12.328 12:41:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:12.328 12:41:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:12.328 12:41:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.328 12:41:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.328 12:41:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:12.328 12:41:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:12.328 12:41:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:12.328 12:41:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.328 12:41:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.328 12:41:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.328 12:41:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.328 12:41:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.328 12:41:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.328 12:41:30 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.328 12:41:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.328 12:41:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.328 12:41:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.328 12:41:30 -- spdk/autotest.sh@48 -- # udevadm_pid=3237263 00:02:12.328 12:41:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.328 12:41:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.328 12:41:30 -- pm/common@17 -- # local monitor 00:02:12.328 12:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.328 12:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.328 12:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.328 12:41:30 -- pm/common@21 -- # date +%s 00:02:12.328 12:41:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.328 12:41:30 -- pm/common@21 -- # date +%s 00:02:12.328 12:41:30 -- pm/common@25 -- # sleep 1 00:02:12.329 12:41:30 -- pm/common@21 -- # date +%s 00:02:12.329 12:41:30 -- pm/common@21 -- # date +%s 00:02:12.329 12:41:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040090 00:02:12.329 12:41:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040090 00:02:12.329 12:41:30 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040090 00:02:12.329 12:41:30 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721040090 00:02:12.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040090_collect-vmstat.pm.log 00:02:12.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040090_collect-cpu-load.pm.log 00:02:12.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040090_collect-cpu-temp.pm.log 00:02:12.329 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721040090_collect-bmc-pm.bmc.pm.log 00:02:13.266 12:41:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:13.266 12:41:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:13.266 12:41:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:13.266 12:41:31 -- common/autotest_common.sh@10 -- # set +x 00:02:13.266 12:41:31 -- spdk/autotest.sh@59 -- # create_test_list 00:02:13.266 12:41:31 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:13.266 12:41:31 -- common/autotest_common.sh@10 -- # set +x 00:02:13.266 12:41:31 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:13.266 12:41:31 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.266 12:41:31 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
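Note on the resource monitors started above: the collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm helpers are launched with the -d/-l/-p options shown, their sample output is redirected to the per-run monitor.autotest.sh.<timestamp>_*.pm.log files, and each leaves a *.pid file under the power/ output directory; the matching stop step (seen earlier, at the end of the preceding make stage) sends TERM to every PID recorded there. A minimal sketch of that start/stop pattern in shell, using a hypothetical collector name (collect-sample) and output path rather than the actual SPDK pm scripts:

  # start: run a collector in the background and remember its PID
  OUT=/tmp/power                                   # hypothetical output directory
  mkdir -p "$OUT"
  ./collect-sample -d "$OUT" -l -p "monitor.$(date +%s)" > "$OUT/collect-sample.pm.log" 2>&1 &
  echo $! > "$OUT/collect-sample.pid"              # remember the PID so it can be signalled later

  # stop: signal every recorded collector with TERM
  for pidfile in "$OUT"/*.pid; do
      [ -e "$pidfile" ] && kill -TERM "$(cat "$pidfile")"
  done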
00:02:13.266 12:41:31 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.266 12:41:31 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.266 12:41:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:13.266 12:41:31 -- common/autotest_common.sh@1455 -- # uname 00:02:13.266 12:41:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:13.266 12:41:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:13.266 12:41:31 -- common/autotest_common.sh@1475 -- # uname 00:02:13.266 12:41:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:13.266 12:41:31 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:13.266 12:41:31 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:13.266 12:41:31 -- spdk/autotest.sh@72 -- # hash lcov 00:02:13.266 12:41:31 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:13.266 12:41:31 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:13.266 --rc lcov_branch_coverage=1 00:02:13.266 --rc lcov_function_coverage=1 00:02:13.266 --rc genhtml_branch_coverage=1 00:02:13.266 --rc genhtml_function_coverage=1 00:02:13.266 --rc genhtml_legend=1 00:02:13.266 --rc geninfo_all_blocks=1 00:02:13.266 ' 00:02:13.266 12:41:31 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:13.266 --rc lcov_branch_coverage=1 00:02:13.266 --rc lcov_function_coverage=1 00:02:13.266 --rc genhtml_branch_coverage=1 00:02:13.266 --rc genhtml_function_coverage=1 00:02:13.266 --rc genhtml_legend=1 00:02:13.266 --rc geninfo_all_blocks=1 00:02:13.266 ' 00:02:13.266 12:41:31 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:13.266 --rc lcov_branch_coverage=1 00:02:13.266 --rc lcov_function_coverage=1 00:02:13.266 --rc genhtml_branch_coverage=1 00:02:13.266 --rc genhtml_function_coverage=1 00:02:13.266 --rc genhtml_legend=1 00:02:13.266 --rc geninfo_all_blocks=1 00:02:13.266 --no-external' 00:02:13.266 12:41:31 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:13.266 --rc lcov_branch_coverage=1 00:02:13.266 --rc lcov_function_coverage=1 00:02:13.266 --rc genhtml_branch_coverage=1 00:02:13.266 --rc genhtml_function_coverage=1 00:02:13.266 --rc genhtml_legend=1 00:02:13.266 --rc geninfo_all_blocks=1 00:02:13.266 --no-external' 00:02:13.266 12:41:31 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:13.266 lcov: LCOV version 1.14 00:02:13.266 12:41:31 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:15.166 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:15.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:15.166 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:15.167 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:15.167 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:15.167 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:15.167 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:30.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:30.034 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:48.107 12:42:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:48.107 12:42:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:48.107 12:42:04 -- common/autotest_common.sh@10 -- # set +x 00:02:48.107 12:42:04 -- spdk/autotest.sh@91 -- # rm -f 00:02:48.107 12:42:04 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.107 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:02:48.107 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:48.107 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:48.107 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:48.107 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:48.107 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:48.107 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:48.107 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:48.107 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:48.107 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:48.107 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:48.107 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:48.107 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:48.107 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:48.107 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:48.107 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:48.107 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:48.365 12:42:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:48.365 12:42:06 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:48.365 12:42:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:48.365 12:42:06 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:48.365 12:42:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:48.365 12:42:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:48.365 12:42:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:48.365 12:42:06 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.365 12:42:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:48.365 12:42:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:48.365 12:42:06 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.365 12:42:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:48.365 12:42:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:48.365 12:42:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:48.365 12:42:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.365 No valid GPT data, bailing 00:02:48.365 12:42:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.365 12:42:06 -- scripts/common.sh@391 -- # pt= 00:02:48.365 12:42:06 -- scripts/common.sh@392 -- # return 1 00:02:48.365 12:42:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.365 1+0 records in 00:02:48.365 1+0 records out 00:02:48.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00235179 s, 446 MB/s 00:02:48.365 12:42:06 -- spdk/autotest.sh@118 -- # sync 00:02:48.365 12:42:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.365 12:42:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.365 12:42:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.265 12:42:08 -- spdk/autotest.sh@124 -- # uname -s 00:02:50.265 12:42:08 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:50.265 12:42:08 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.265 12:42:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.265 12:42:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.265 12:42:08 -- common/autotest_common.sh@10 -- # set +x 00:02:50.265 ************************************ 00:02:50.265 START TEST setup.sh 00:02:50.265 ************************************ 00:02:50.265 12:42:08 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.265 * Looking for test storage... 00:02:50.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.265 12:42:08 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:50.265 12:42:08 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.265 12:42:08 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.265 12:42:08 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.265 12:42:08 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.265 12:42:08 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.265 ************************************ 00:02:50.265 START TEST acl 00:02:50.265 ************************************ 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.265 * Looking for test storage... 
00:02:50.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.265 12:42:08 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.265 12:42:08 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:50.265 12:42:08 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.265 12:42:08 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.265 12:42:08 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.265 12:42:08 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.265 12:42:08 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.265 12:42:08 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.265 12:42:08 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.641 12:42:09 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:51.641 12:42:09 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:51.641 12:42:09 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:51.641 12:42:09 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.641 12:42:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.641 12:42:09 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:53.021 Hugepages 00:02:53.021 node hugesize free / total 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 00:02:53.021 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.021 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:53.022 12:42:11 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:53.022 12:42:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.022 12:42:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.022 12:42:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.022 ************************************ 00:02:53.022 START TEST denied 00:02:53.022 ************************************ 00:02:53.022 12:42:11 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:53.022 12:42:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:02:53.022 12:42:11 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:53.022 12:42:11 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:02:53.022 12:42:11 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.022 12:42:11 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.927 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:02:54.927 12:42:12 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.927 12:42:12 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.473 00:02:57.473 real 0m4.060s 00:02:57.473 user 0m1.155s 00:02:57.473 sys 0m1.963s 00:02:57.473 12:42:15 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:57.473 12:42:15 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:57.473 ************************************ 00:02:57.473 END TEST denied 00:02:57.473 ************************************ 00:02:57.473 12:42:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:02:57.473 12:42:15 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:57.473 12:42:15 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.473 12:42:15 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.473 12:42:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:57.473 ************************************ 00:02:57.473 START TEST allowed 00:02:57.473 ************************************ 00:02:57.473 12:42:15 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:57.473 12:42:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:02:57.474 12:42:15 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:57.474 12:42:15 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:02:57.474 12:42:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.474 12:42:15 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.010 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.010 12:42:17 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:00.010 12:42:17 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:00.010 12:42:17 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:00.010 12:42:17 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:00.010 12:42:17 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.417 00:03:01.417 real 0m4.138s 00:03:01.417 user 0m1.112s 00:03:01.417 sys 0m1.853s 00:03:01.417 12:42:19 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.417 12:42:19 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:01.417 ************************************ 00:03:01.417 END TEST allowed 00:03:01.417 ************************************ 00:03:01.417 12:42:19 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:01.417 00:03:01.417 real 0m11.111s 00:03:01.417 user 0m3.421s 00:03:01.417 sys 0m5.671s 00:03:01.417 12:42:19 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:01.417 12:42:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:01.417 ************************************ 00:03:01.417 END TEST acl 00:03:01.417 ************************************ 00:03:01.417 12:42:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:01.417 12:42:19 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.417 12:42:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.417 12:42:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.417 12:42:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:01.417 ************************************ 00:03:01.417 START TEST hugepages 00:03:01.417 ************************************ 00:03:01.417 12:42:19 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:01.417 * Looking for test storage... 00:03:01.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.417 12:42:19 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 27436376 kB' 'MemAvailable: 31006516 kB' 'Buffers: 2704 kB' 'Cached: 9926784 kB' 'SwapCached: 0 kB' 'Active: 6941136 kB' 'Inactive: 3505248 kB' 'Active(anon): 6551596 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520004 kB' 'Mapped: 191204 kB' 'Shmem: 6034700 kB' 'KReclaimable: 174352 kB' 'Slab: 514116 kB' 'SReclaimable: 174352 kB' 'SUnreclaim: 339764 kB' 'KernelStack: 12272 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 7682132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195472 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' [... 00:03:01.417-00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@31-32: the remaining /proc/meminfo keys, MemTotal through HugePages_Free, are each read and skipped with continue, none matching Hugepagesize ...] 00:03:01.418 12:42:19 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.418 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.419 
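Up to this point the trace is setup/common.sh's get_meminfo helper walking /proc/meminfo with IFS=': ' and read -r var val _, skipping every key until Hugepagesize matches and echoing its value (2048, i.e. 2 MiB pages). A minimal, self-contained sketch of that parsing pattern follows; the function body is illustrative and not a copy of the SPDK helper.

#!/usr/bin/env bash
# Sketch of the /proc/meminfo walk shown in the trace above: read key/value
# pairs with IFS=': ', skip non-matching keys, print the value of the
# requested one. Illustrative only, not the SPDK setup/common.sh code.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # MemTotal, MemFree, ... are skipped
        echo "$val"                        # for Hugepagesize this prints 2048 (kB)
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize   # -> 2048 on a host using 2 MiB hugepages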
12:42:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:01.419 12:42:19 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:01.419 12:42:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:01.419 12:42:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.419 12:42:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:01.419 ************************************ 00:03:01.419 START TEST default_setup 00:03:01.419 ************************************ 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.419 12:42:19 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.793 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:02.793 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:02.793 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:02.793 
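In the trace above, get_test_nr_hugepages 2097152 0 converts the requested 2097152 kB into 2097152 / 2048 = 1024 hugepages and assigns all of them to node 0 (nodes_test[0]=1024), after clear_hp has zeroed nr_hugepages for every page size on both NUMA nodes. A sketch of the same sysfs flow is below; the helper names are illustrative rather than the actual setup/hugepages.sh functions, and the writes require root.

#!/usr/bin/env bash
# Clear-then-allocate flow suggested by the trace above, using the standard
# kernel hugetlb sysfs interface. Illustrative helpers, not the SPDK script.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"      # matches the repeated "echo 0" entries
        done
    done
}

allocate_node0() {
    local size_kb=2097152 page_kb=2048
    local nr=$(( size_kb / page_kb ))        # 2097152 / 2048 = 1024 pages
    echo "$nr" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
}

clear_hp
allocate_node0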
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:02.793 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:02.793 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:02.793 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:02.793 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:02.793 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:02.793 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:02.793 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:02.793 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:03.052 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:03.052 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:03.052 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:03.052 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:03.998 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29579532 kB' 'MemAvailable: 33149748 kB' 'Buffers: 2704 kB' 'Cached: 9926876 kB' 'SwapCached: 0 kB' 'Active: 6962044 kB' 'Inactive: 3505248 kB' 'Active(anon): 6572504 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541012 kB' 'Mapped: 191136 kB' 'Shmem: 6034792 kB' 'KReclaimable: 174504 kB' 'Slab: 513832 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339328 kB' 
'KernelStack: 12656 kB' 'PageTables: 9468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7703320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.998 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.998 
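The snapshot above reports HugePages_Total: 1024, HugePages_Free: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB; with only 2 MiB pages in use on this host, Hugetlb equals HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2 GiB, i.e. the pool sized by get_test_nr_hugepages is in place. A quick manual check of that relation (standard /proc/meminfo fields):

#!/usr/bin/env bash
# Sanity check: Hugetlb (kB) should equal HugePages_Total * Hugepagesize (kB)
# when a single hugepage size is in use.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # e.g. 1024
size=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)       # e.g. 2048 (kB)
hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)         # e.g. 2097152 (kB)
echo "expected $(( total * size )) kB, reported ${hugetlb} kB"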
12:42:22 setup.sh.hugepages.default_setup -- [... 00:03:03.998-00:03:03.999 setup/common.sh@31-32: the /proc/meminfo keys Inactive through WritebackTmp are each read and skipped with continue, none matching AnonHugePages ...]
00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.999 12:42:22 
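verify_nr_hugepages gathers the hugepage accounting one counter at a time: AnonHugePages (anon=0 above), HugePages_Surp here, and HugePages_Rsvd next, before checking the totals against the 1024 pages it configured. The outline below is a simplified stand-in for that idea; the real setup/hugepages.sh bookkeeping also walks per-node counters, and the expected value and failure handling here are assumptions for illustration.

#!/usr/bin/env bash
# Simplified verification outline: read the global hugepage counters and fail
# if the pool does not match the requested size. Not the real verify_nr_hugepages().
expected=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
echo "total=$total free=$free surp=$surp rsvd=$rsvd"
(( total == expected )) || { echo "hugepage pool mismatch" >&2; exit 1; }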
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29579556 kB' 'MemAvailable: 33149772 kB' 'Buffers: 2704 kB' 'Cached: 9926876 kB' 'SwapCached: 0 kB' 'Active: 6961260 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571720 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540200 kB' 'Mapped: 191116 kB' 'Shmem: 6034792 kB' 'KReclaimable: 174504 kB' 'Slab: 513832 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339328 kB' 'KernelStack: 12384 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7703340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.999 12:42:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ [... 00:03:03.999-00:03:04.001 setup/common.sh@31-32: the /proc/meminfo keys Buffers through CmaTotal are each read and skipped with continue, none matching HugePages_Surp ...] 00:03:04.001 12:42:22
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29578848 kB' 'MemAvailable: 33149064 kB' 'Buffers: 2704 kB' 'Cached: 9926884 kB' 'SwapCached: 0 kB' 'Active: 6960912 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571372 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539924 kB' 'Mapped: 191184 kB' 'Shmem: 6034800 kB' 'KReclaimable: 174504 kB' 'Slab: 513792 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339288 kB' 'KernelStack: 12448 kB' 'PageTables: 8536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7703360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.001 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
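Before each scan the trace also shows common.sh@29 stripping any leading "Node <N> " prefix from the snapshot (mem=("${mem[@]#Node +([0-9]) }")), so per-node files under /sys/devices/system/node/nodeN/meminfo parse the same way as /proc/meminfo. A small sketch of that same expansion, assuming extglob and node0 as an example:

  shopt -s extglob
  mapfile -t mem < /sys/devices/system/node/node0/meminfo
  mem=("${mem[@]#Node +([0-9]) }")   # each entry now starts with the bare key name
  printf '%s\n' "${mem[@]}"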
00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 
12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.002 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:04.003 nr_hugepages=1024 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.003 resv_hugepages=0 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.003 surplus_hugepages=0 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.003 anon_hugepages=0 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29580296 
kB' 'MemAvailable: 33150512 kB' 'Buffers: 2704 kB' 'Cached: 9926920 kB' 'SwapCached: 0 kB' 'Active: 6960424 kB' 'Inactive: 3505248 kB' 'Active(anon): 6570884 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539376 kB' 'Mapped: 191184 kB' 'Shmem: 6034836 kB' 'KReclaimable: 174504 kB' 'Slab: 513912 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339408 kB' 'KernelStack: 12320 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7703384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.003 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
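The hugepages.sh records in this stretch (surp=0, resv=0, nr_hugepages=1024, then the (( ... )) tests) reduce to simple arithmetic over the kernel's hugepage counters: the test asserts that the counters reported in meminfo line up with the requested nr_hugepages plus any reserved and surplus pages. A hedged sketch of that check, reusing the hypothetical get_meminfo_value helper from the earlier sketch (not the exact hugepages.sh code):

  nr_hugepages=1024
  total=$(get_meminfo_value HugePages_Total)
  rsvd=$(get_meminfo_value HugePages_Rsvd)
  surp=$(get_meminfo_value HugePages_Surp)
  (( total == nr_hugepages + surp + rsvd )) || echo "hugepage accounting mismatch" >&2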
00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.004 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
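From here the trace moves from the system-wide view to per-NUMA-node accounting: get_nodes enumerates /sys/devices/system/node/node[0-9]* (two nodes on this host) and the same field scan is repeated against node0's meminfo file, whose dump appears below. A standalone sketch of that per-node walk, illustrative only and not the hugepages.sh implementation; it strips the "Node N " prefix so the lines parse like /proc/meminfo:

  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      total=$(sed 's/^Node [0-9]* //' "$node/meminfo" |
              awk -F': *' '$1 == "HugePages_Total" {print $2}')
      echo "node$n: $total hugepages"
  done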
00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20190876 kB' 'MemUsed: 4381480 kB' 'SwapCached: 0 kB' 'Active: 1620580 kB' 'Inactive: 72500 kB' 'Active(anon): 1491312 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383032 kB' 'Mapped: 85136 kB' 'AnonPages: 313224 kB' 'Shmem: 1181264 kB' 'KernelStack: 6808 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51720 kB' 'Slab: 209620 kB' 'SReclaimable: 51720 kB' 'SUnreclaim: 157900 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.005 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:04.006 node0=1024 expecting 1024 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:04.006 00:03:04.006 real 0m2.545s 00:03:04.006 user 0m0.692s 00:03:04.006 sys 0m0.980s 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:04.006 12:42:22 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:04.006 ************************************ 00:03:04.006 END TEST default_setup 00:03:04.006 ************************************ 00:03:04.006 12:42:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:04.006 12:42:22 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:04.006 12:42:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:04.006 12:42:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.006 12:42:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.265 ************************************ 00:03:04.266 START TEST per_node_1G_alloc 00:03:04.266 ************************************ 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:04.266 12:42:22 
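(Editor's note, not part of the captured log.) The "node0=1024 expecting 1024" lines above are the default_setup test confirming that NUMA node 0 ended up with the expected number of 2 MB hugepages. A minimal standalone sketch of that kind of check, using the kernel's per-node sysfs counter instead of the SPDK helpers (paths and variable names here are illustrative only):

    #!/usr/bin/env bash
    expected=1024
    node=0
    # Per-node 2 MB hugepage count exposed by the kernel
    nr=$(cat /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages)
    if [[ "$nr" == "$expected" ]]; then
        echo "node${node}=${nr} expecting ${expected}"
    else
        echo "node${node}=${nr} expecting ${expected} (mismatch)" >&2
        exit 1
    fi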
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.266 12:42:22 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.200 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.200 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.200 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.200 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.200 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.200 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.200 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.200 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.200 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.200 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.200 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.200 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.200 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.200 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.200 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.200 
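(Editor's note, not part of the captured log.) The per_node_1G_alloc test above exports NRHUGE=512 and HUGENODE=0,1 before invoking scripts/setup.sh, i.e. it asks for 512 x 2 MB hugepages (1 GB) to be reserved on each of the two NUMA nodes. A simplified manual equivalent using the kernel's per-node sysfs knobs -- a sketch only; the real allocation and device binding is done by scripts/setup.sh:

    #!/usr/bin/env bash
    NRHUGE=512
    for node in 0 1; do
        # Reserve NRHUGE 2 MB hugepages on this NUMA node
        echo "$NRHUGE" | sudo tee \
            /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    done
    grep -E '^HugePages_(Total|Free):' /proc/meminfo   # expect 1024 total afterwards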
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.200 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29586692 kB' 'MemAvailable: 33156908 kB' 'Buffers: 2704 kB' 'Cached: 9927136 kB' 'SwapCached: 0 kB' 'Active: 6961700 kB' 'Inactive: 3505248 kB' 'Active(anon): 6572160 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539468 kB' 'Mapped: 191216 kB' 'Shmem: 6035052 kB' 'KReclaimable: 174504 kB' 'Slab: 513888 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339384 kB' 'KernelStack: 12304 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7704724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.464 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- 
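(Editor's note, not part of the captured log.) The long run of "[[ ... ]] / continue" entries around here is setup/common.sh's get_meminfo() stepping through every field of the /proc/meminfo snapshot it just printed until it reaches the one it was asked for (AnonHugePages in this call; the result, anon=0, appears a little further down). A minimal standalone sketch of the same parsing technique -- simplified, reading /proc/meminfo directly and without the per-node handling the real helper has:

    #!/usr/bin/env bash
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip all other meminfo fields
            echo "$val"                        # kB value, or a page count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo AnonHugePages     # e.g. 0
    get_meminfo HugePages_Total   # e.g. 1024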
setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.465 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29590344 kB' 'MemAvailable: 33160560 kB' 'Buffers: 2704 kB' 'Cached: 9927140 kB' 'SwapCached: 0 kB' 'Active: 6964288 kB' 'Inactive: 3505248 kB' 'Active(anon): 6574748 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542560 kB' 'Mapped: 191660 kB' 'Shmem: 6035056 kB' 'KReclaimable: 174504 kB' 'Slab: 513880 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339376 kB' 'KernelStack: 12352 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7706932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.466 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- 
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:05.467 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.468 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.468 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.468 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.468 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29589600 kB' 'MemAvailable: 33159816 kB' 'Buffers: 2704 kB' 'Cached: 9927156 kB' 'SwapCached: 0 kB' 'Active: 6966288 kB' 'Inactive: 3505248 kB' 'Active(anon): 6576748 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544936 kB' 'Mapped: 191660 kB' 'Shmem: 6035072 kB' 'KReclaimable: 174504 kB' 'Slab: 513988 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339484 kB' 'KernelStack: 12352 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7709868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195540 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB'
[... xtrace elided: setup/common.sh@32 compares each snapshot row against HugePages_Rsvd and skips it with continue until the matching row is reached ...]
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:05.469 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace elided: the same local-variable setup as above follows, this time with get=HugePages_Total, and the refreshed /proc/meminfo snapshot below is read in ...]
00:03:05.470 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29588160 kB' 'MemAvailable: 33158376 kB' 'Buffers: 2704 kB' 'Cached: 9927180 kB' 'SwapCached: 0 kB' 'Active: 6961132 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571592 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539768 kB' 'Mapped: 191984 kB' 'Shmem: 6035096 kB' 'KReclaimable: 174504 kB' 'Slab: 513988 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339484 kB' 'KernelStack: 12320 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7704864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195536 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB'
[... xtrace elided: the same row-by-row comparison, this time against HugePages_Total ...]
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
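What the hugepages.sh checks above amount to: the kernel's HugePages_Total has to equal the requested page count plus any surplus and reserved pages, and all of those extras should be zero here. A rough sketch of that bookkeeping, using hypothetical variable names and the get_meminfo sketch shown earlier:

# Rough shape of the verification traced above (hypothetical names, values from the snapshots)
nr_hugepages=1024                       # pages the test asked for (2048 kB each)
surp=$(get_meminfo HugePages_Surp)      # 0 in the snapshots above
resv=$(get_meminfo HugePages_Rsvd)      # 0 in the snapshots above
total=$(get_meminfo HugePages_Total)    # 1024 in the snapshots above
(( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )) &&
    echo "hugepage accounting consistent: $total pages"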
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
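get_nodes has just found two NUMA nodes with 512 hugepages recorded for each; at a 2048 kB page size that is 1 GiB per node, which is presumably what the per_node_1G_alloc case is exercising. A sketch of that walk, with a hypothetical source for the per-node count (the real script stores the values in nodes_sys, as the trace shows):

# Sketch of the per-node walk (hypothetical; mirrors the nodes_sys bookkeeping above)
declare -a nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    id=${node##*node}                                   # same strip as in the trace
    nodes_sys[id]=$(get_meminfo HugePages_Total "$id")  # 512 on each node on this rig
done
echo "nodes: ${#nodes_sys[@]}, pages per node: ${nodes_sys[0]}"  # 2 nodes x 512 x 2048 kB = 1 GiB per node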
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:05.471 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:05.472 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21257424 kB' 'MemUsed: 3314932 kB' 'SwapCached: 0 kB' 'Active: 1624524 kB' 'Inactive: 72500 kB' 'Active(anon): 1495256 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383088 kB' 'Mapped: 85140 kB' 'AnonPages: 317052 kB' 'Shmem: 1181320 kB' 'KernelStack: 6856 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51720 kB' 'Slab: 209752 kB' 'SReclaimable: 51720 kB' 'SUnreclaim: 158032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... the trace then walks the node 0 rows one by one against HugePages_Surp, exactly as in the system-wide lookups above ...]
00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8326552 kB' 'MemUsed: 11127764 kB' 'SwapCached: 0 kB' 'Active: 5339932 kB' 'Inactive: 3432748 kB' 'Active(anon): 5079660 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8546924 kB' 'Mapped: 106916 kB' 'AnonPages: 225848 kB' 'Shmem: 4853904 kB' 'KernelStack: 5432 kB' 'PageTables: 3460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122784 kB' 'Slab: 304236 kB' 'SReclaimable: 122784 kB' 'SUnreclaim: 181452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
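The node 1 lookup starting above is the same get_meminfo walk already traced for node 0: open /sys/devices/system/node/node1/meminfo, strip the "Node <N> " prefix from every line, and scan key/value pairs until HugePages_Surp matches. A minimal standalone sketch of that lookup (illustrative names, not the exact setup/common.sh helper):

    get_meminfo() {
        # get_meminfo <field> [node] - print the field's value (kB or page count).
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node counters live under sysfs; otherwise fall back to the global file.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix each line with "Node <N> "; drop it so keys match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo HugePages_Surp 1   # prints 0 on this box, matching the trace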
00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.473 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:05.474 node0=512 expecting 512 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:05.474 node1=512 expecting 512 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:05.474 00:03:05.474 real 0m1.440s 00:03:05.474 user 0m0.612s 00:03:05.474 sys 0m0.803s 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:05.474 12:42:23 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:05.474 ************************************ 00:03:05.474 END TEST per_node_1G_alloc 00:03:05.474 ************************************ 00:03:05.734 12:42:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:05.734 12:42:23 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:05.734 12:42:23 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:05.734 12:42:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:05.734 12:42:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:05.734 ************************************ 00:03:05.734 START TEST even_2G_alloc 00:03:05.734 ************************************ 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:05.734 12:42:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.114 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.114 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
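Before anything is allocated, the even_2G_alloc prologue above turns the 2 GiB request into a count of default-size hugepages (2097152 kB / 2048 kB = 1024) and hands each of the two NUMA nodes an equal share of 512, filling nodes_test from the last node down. A simplified sketch of that arithmetic (names approximate the traced hugepages.sh flow, not verbatim SPDK code, and the remainder bookkeeping is omitted):

    # Default hugepage size in kB; 2048 on this system per the meminfo dumps in the trace.
    default_hugepages=$(awk '/Hugepagesize:/ {print $2}' /proc/meminfo)

    get_test_nr_hugepages_per_node() {
        local _nr_hugepages=$1 _no_nodes=$2
        local -a nodes_test
        local per_node=$(( _nr_hugepages / _no_nodes ))
        # Assign the last node first, as the trace walks _no_nodes down to 0.
        while (( _no_nodes > 0 )); do
            nodes_test[_no_nodes - 1]=$per_node
            (( _no_nodes-- ))
        done
        declare -p nodes_test
    }

    size=2097152                                   # requested kB (2 GiB)
    nr_hugepages=$(( size / default_hugepages ))   # 1024 pages
    get_test_nr_hugepages_per_node "$nr_hugepages" 2
    # -> declare -a nodes_test=([0]="512" [1]="512")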
00:03:07.114 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.114 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.114 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.114 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.114 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.114 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.114 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.114 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.114 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.114 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.114 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.114 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.114 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.114 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.114 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.114 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29583400 kB' 'MemAvailable: 33153616 kB' 'Buffers: 2704 kB' 'Cached: 9927488 kB' 'SwapCached: 0 kB' 'Active: 6961700 kB' 'Inactive: 3505248 kB' 'Active(anon): 6572160 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540156 kB' 'Mapped: 191356 kB' 'Shmem: 6035404 kB' 'KReclaimable: 174504 kB' 'Slab: 514072 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339568 kB' 'KernelStack: 12368 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7704188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 
12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.115 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
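The verify_nr_hugepages pass being traced here first confirms transparent hugepages are not pinned to "[never]", then reads the machine-wide AnonHugePages counter (the loop above) and HugePages_Surp (the lookup that follows) before walking the per-node numbers. A compact, self-contained sketch of that opening sequence (function name and output format are illustrative, not the SPDK helper itself):

    verify_hugepage_counters() {
        # Read the anonymous-THP counter only when THP is not globally disabled.
        local thp anon surp
        thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
        if [[ $thp != *"[never]"* ]]; then
            anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB
        fi
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # pages
        echo "anon=${anon:-n/a} kB surplus=${surp} pages"
    }

    verify_hugepage_counters   # the trace shows anon=0; the meminfo dumps report HugePages_Surp: 0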
00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29583768 kB' 'MemAvailable: 33153984 kB' 'Buffers: 2704 kB' 'Cached: 9927488 kB' 'SwapCached: 0 kB' 'Active: 6962088 kB' 'Inactive: 3505248 kB' 'Active(anon): 6572548 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540104 kB' 'Mapped: 191356 kB' 'Shmem: 6035404 kB' 'KReclaimable: 174504 kB' 'Slab: 514080 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339576 kB' 'KernelStack: 12432 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7704208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.116 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[… setup/common.sh@32 repeats the same per-key trace (IFS=': ', read -r var val _, [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- continue) for every remaining field of the meminfo snapshot above, Buffers through FilePmdMapped; none matches HugePages_Surp …]
00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.117 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29583288 kB' 'MemAvailable: 33153504 kB' 'Buffers: 2704 kB' 'Cached: 9927508 kB' 'SwapCached: 0 kB' 'Active: 6961648 kB' 'Inactive: 3505248 kB' 'Active(anon): 6572108 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540060 kB' 'Mapped: 191300 kB' 'Shmem: 6035424 kB' 'KReclaimable: 174504 kB' 'Slab: 514080 kB' 'SReclaimable: 174504 kB' 'SUnreclaim: 339576 kB' 'KernelStack: 12416 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7704228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.118 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
[… setup/common.sh@32 repeats the same per-key trace (IFS=': ', read -r var val _, [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- continue) for every field of the meminfo snapshot above, Cached through CmaFree; none matches HugePages_Rsvd …]
00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.120 nr_hugepages=1024 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.120 resv_hugepages=0 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.120 surplus_hugepages=0 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.120 anon_hugepages=0 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29583328 kB' 'MemAvailable: 33153540 kB' 'Buffers: 2704 kB' 'Cached: 9927532 kB' 'SwapCached: 0 kB' 'Active: 6961284 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571744 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539636 kB' 'Mapped: 191224 kB' 'Shmem: 6035448 kB' 'KReclaimable: 174496 kB' 'Slab: 514060 kB' 'SReclaimable: 174496 kB' 'SUnreclaim: 339564 kB' 'KernelStack: 12400 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7704252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.120 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.120 12:42:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[… setup/common.sh@32 repeats the same per-key trace (IFS=': ', read -r var val _, [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -- continue) for the fields SwapCached through Unaccepted of the meminfo snapshot above; none matches HugePages_Total …] 00:03:07.121
12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.121 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21263148 kB' 'MemUsed: 3309208 kB' 'SwapCached: 0 kB' 'Active: 1620260 kB' 'Inactive: 72500 kB' 'Active(anon): 1490992 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383116 kB' 'Mapped: 85144 kB' 'AnonPages: 312836 kB' 'Shmem: 1181348 kB' 'KernelStack: 6904 kB' 'PageTables: 4412 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51712 kB' 'Slab: 209768 kB' 'SReclaimable: 51712 kB' 'SUnreclaim: 158056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 
12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.122 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8320992 kB' 'MemUsed: 11133324 kB' 'SwapCached: 0 kB' 'Active: 5341040 kB' 'Inactive: 3432748 kB' 'Active(anon): 5080768 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8547160 kB' 'Mapped: 106080 kB' 'AnonPages: 226812 kB' 'Shmem: 4854140 kB' 'KernelStack: 5496 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122784 kB' 'Slab: 304292 kB' 'SReclaimable: 122784 kB' 'SUnreclaim: 181508 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.123 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
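[editor's note] The trace above is setup/common.sh's get_meminfo helper scanning a per-node meminfo file key by key (MemTotal, MemFree, ... continue) until it reaches the requested field, here HugePages_Surp, and echoing its value. Below is a minimal standalone sketch of that parsing pattern, assuming bash with extglob; the function name meminfo_value is made up for illustration and is not the SPDK helper itself.

    #!/usr/bin/env bash
    # Hedged sketch only - approximates the parsing visible in the xtrace above,
    # not the project's setup/common.sh implementation.
    shopt -s extglob

    meminfo_value() {                      # usage: meminfo_value <Key> [<node>]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        local -a mem=()
        # Per-node queries read that node's own meminfo file instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines are prefixed with "Node <N> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then  # e.g. HugePages_Surp
                echo "$val"                # value only, unit ("kB") is dropped
                return 0
            fi
        done
        return 1
    }

    meminfo_value HugePages_Surp 0   # on the box traced above this prints 0

[end editor's note]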
00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:07.124 node0=512 expecting 512 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:07.124 node1=512 expecting 512 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:07.124 00:03:07.124 real 0m1.547s 00:03:07.124 user 0m0.627s 00:03:07.124 sys 0m0.897s 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:07.124 12:42:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:07.124 ************************************ 00:03:07.124 END TEST even_2G_alloc 00:03:07.124 ************************************ 00:03:07.124 12:42:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:07.124 12:42:25 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:07.124 12:42:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.124 12:42:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.124 12:42:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:07.124 ************************************ 00:03:07.124 START TEST odd_alloc 
00:03:07.124 ************************************ 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:07.124 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.125 12:42:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.505 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:08.505 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.505 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:08.505 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:08.505 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:08.505 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:08.505 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:08.505 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
00:03:08.505 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:08.505 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:08.505 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:08.505 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:08.505 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:08.505 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:08.505 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:08.505 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:08.505 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29591328 kB' 'MemAvailable: 33161528 kB' 'Buffers: 2704 kB' 'Cached: 9927780 kB' 'SwapCached: 0 kB' 'Active: 6959744 kB' 'Inactive: 3505248 kB' 'Active(anon): 6570204 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537612 kB' 'Mapped: 190396 kB' 'Shmem: 6035696 kB' 'KReclaimable: 174472 kB' 'Slab: 514192 kB' 'SReclaimable: 174472 kB' 'SUnreclaim: 339720 kB' 'KernelStack: 12512 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7693488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 
'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.505 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 
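The xtrace above is setup/common.sh's get_meminfo scanning the meminfo output one entry at a time: IFS=': ' splits each line into a key and a value, every non-matching key falls through to "continue", and the first matching key (AnonHugePages in this pass) has its value echoed back and captured as anon=0. Reduced to its core, the pattern is roughly the following minimal sketch; the helper name is hypothetical and the real get_meminfo additionally handles a per-node argument:

    lookup_meminfo() {                          # hypothetical name; the traced function is get_meminfo
        local get=$1 var val _
        while IFS=': ' read -r var val _; do    # var=key, val=number, _=unit ("kB")
            [[ $var == "$get" ]] || continue    # every other key is skipped, as seen in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    anon=$(lookup_meminfo AnonHugePages)        # -> 0 on this host, matching the trace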
12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29593024 kB' 'MemAvailable: 33163224 kB' 'Buffers: 2704 kB' 'Cached: 9927784 kB' 'SwapCached: 0 kB' 'Active: 6959084 kB' 'Inactive: 3505248 kB' 'Active(anon): 6569544 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536964 kB' 'Mapped: 190384 kB' 'Shmem: 6035700 kB' 'KReclaimable: 174472 kB' 'Slab: 514192 kB' 'SReclaimable: 174472 kB' 'SUnreclaim: 339720 kB' 'KernelStack: 12384 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7691148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.506 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.507 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29593392 kB' 'MemAvailable: 33163592 kB' 'Buffers: 2704 kB' 'Cached: 9927800 kB' 'SwapCached: 0 kB' 'Active: 6958384 kB' 'Inactive: 3505248 kB' 'Active(anon): 6568844 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536268 kB' 'Mapped: 190384 kB' 'Shmem: 6035716 kB' 'KReclaimable: 174472 kB' 'Slab: 514308 kB' 'SReclaimable: 174472 kB' 'SUnreclaim: 339836 kB' 'KernelStack: 12352 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7691168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
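Before each scan the trace also shows how the input file is chosen: with an empty node argument the function stays on /proc/meminfo, while a per-node query would read /sys/devices/system/node/node$node/meminfo, whose lines carry a "Node N " prefix that is stripped before parsing. Roughly, as a hedged sketch using the same variable names that appear in the trace:

    node=${1:-}                                     # empty for the system-wide reads traced here
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                # per-node lines start with "Node N "; drop it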
-- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.508 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 
12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:08.509 nr_hugepages=1025 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.509 resv_hugepages=0 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.509 surplus_hugepages=0 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.509 anon_hugepages=0 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- 
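With anon, surp and resv all read back as 0, the odd_alloc test prints its bookkeeping (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and runs the arithmetic checks seen at setup/hugepages.sh@107 and @109 before re-reading HugePages_Total. Restated as a standalone sketch (not the script's literal lines; the echoed message is hypothetical):

    nr_hugepages=1025; surp=0; resv=0               # the values echoed and parsed above
    if (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )); then
        echo "odd_alloc accounting consistent"      # the odd page count survived allocation intact
    fi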
setup/common.sh@20 -- # local mem_f mem 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29593732 kB' 'MemAvailable: 33163932 kB' 'Buffers: 2704 kB' 'Cached: 9927824 kB' 'SwapCached: 0 kB' 'Active: 6958384 kB' 'Inactive: 3505248 kB' 'Active(anon): 6568844 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536296 kB' 'Mapped: 190384 kB' 'Shmem: 6035740 kB' 'KReclaimable: 174472 kB' 'Slab: 514316 kB' 'SReclaimable: 174472 kB' 'SUnreclaim: 339844 kB' 'KernelStack: 12368 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7691188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195712 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.509 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.510 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.531 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21256504 kB' 'MemUsed: 3315852 kB' 'SwapCached: 0 kB' 'Active: 1618312 kB' 'Inactive: 72500 kB' 'Active(anon): 1489044 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383112 kB' 'Mapped: 85144 kB' 'AnonPages: 310828 kB' 'Shmem: 1181344 kB' 'KernelStack: 6824 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51688 kB' 'Slab: 210004 kB' 'SReclaimable: 51688 kB' 'SUnreclaim: 158316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
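The key-by-key scans in this trace are setup/common.sh's get_meminfo helper at work: it mapfile-reads either /proc/meminfo or a per-node /sys/devices/system/node/nodeN/meminfo, strips any "Node N " prefix, and walks the lines with an IFS=': ' read until the requested key is found and echoed (HugePages_Total, 1025 on this box, just above; HugePages_Surp for node 0 next). A minimal standalone sketch of that parsing pattern, with illustrative names and simplified handling rather than the literal helper:

shopt -s extglob                        # needed for the +([0-9]) prefix strip below
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # key to look up, optional NUMA node
    local mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Total    -> 1025 on this system
#      get_meminfo_sketch HugePages_Surp 0   -> 0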
00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.532 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8336976 kB' 'MemUsed: 11117340 kB' 'SwapCached: 0 kB' 'Active: 5340104 kB' 'Inactive: 3432748 kB' 'Active(anon): 5079832 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8547440 kB' 'Mapped: 105240 kB' 'AnonPages: 225476 kB' 'Shmem: 4854420 kB' 'KernelStack: 5544 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122784 kB' 'Slab: 304312 kB' 'SReclaimable: 122784 kB' 'SUnreclaim: 181528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.533 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.534 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
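These per-node passes feed the bookkeeping that closes the odd_alloc test a few records below (hugepages.sh@115-130 in this trace): any HugePages_Surp found on a node is added to that node's expected count, and the expected and observed counts are then compared as sorted sets, so it does not matter which node received the odd extra page. A condensed sketch of that check, with values assumed from this run (1025 pages split 512/513), not the literal hugepages.sh code:

declare -a nodes_sys=( [0]=512 [1]=513 )   # counts reported by the kernel per node
declare -a nodes_test=( [0]=513 [1]=512 )  # counts the test planned per node
declare -a sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    surp=0                                 # HugePages_Surp read from node$node's meminfo
    (( nodes_test[node] += surp ))
    # Using the page count itself as an array index makes ${!array[*]} come back sorted.
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd allocation verified"   # "512 513" on both sides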
00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.792 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:08.793 node0=512 expecting 513 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:08.793 node1=513 expecting 512 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:08.793 00:03:08.793 real 0m1.433s 00:03:08.793 user 0m0.598s 00:03:08.793 sys 0m0.806s 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:08.793 12:42:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:08.793 ************************************ 00:03:08.793 END TEST odd_alloc 00:03:08.793 ************************************ 00:03:08.793 12:42:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:08.793 12:42:26 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:08.793 12:42:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:08.793 12:42:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.793 12:42:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:08.793 ************************************ 00:03:08.793 START TEST custom_alloc 00:03:08.793 ************************************ 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.793 12:42:26 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
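At this point custom_alloc has sized its two pools: get_test_nr_hugepages was asked for 1048576 kB and then 2097152 kB, which with the 2048 kB Hugepagesize reported elsewhere in this log works out to nodes_hp[0]=512 and nodes_hp[1]=1024 pages. The records that follow assemble those into the HUGENODE list handed to scripts/setup.sh and sum them into nr_hugepages=1536. A small sketch of that arithmetic and assembly (illustrative only; hugenode_str is a name introduced here, the real script joins HUGENODE itself via a comma IFS):

default_hugepages=2048                            # kB, Hugepagesize from /proc/meminfo
declare -a nodes_hp=()
nodes_hp[0]=$(( 1048576 / default_hugepages ))    # 512 pages for the 1 GiB pool
nodes_hp[1]=$(( 2097152 / default_hugepages ))    # 1024 pages for the 2 GiB pool
declare -a HUGENODE=()
nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( nr_hugepages += nodes_hp[node] ))
done
hugenode_str=$(IFS=,; echo "${HUGENODE[*]}")
echo "HUGENODE=$hugenode_str nr_hugepages=$nr_hugepages"
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024 nr_hugepages=1536
# matching the HugePages_Total: 1536 reported once setup.sh has run later in this log.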
00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:08.793 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.794 12:42:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.177 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:10.177 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.177 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:10.177 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:10.177 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:10.177 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:10.177 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:10.177 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:10.177 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.177 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:10.177 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:10.177 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:03:10.177 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:10.177 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:10.177 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:10.177 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:10.177 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28530832 kB' 'MemAvailable: 32101028 kB' 'Buffers: 2704 kB' 'Cached: 9927912 kB' 'SwapCached: 0 kB' 'Active: 6964148 kB' 'Inactive: 3505248 kB' 'Active(anon): 6574608 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541952 kB' 'Mapped: 190824 kB' 'Shmem: 6035828 kB' 'KReclaimable: 174464 kB' 'Slab: 514140 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339676 kB' 'KernelStack: 12384 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7697376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195668 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.177 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.178 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28533864 kB' 'MemAvailable: 32104060 kB' 'Buffers: 2704 kB' 'Cached: 9927916 kB' 'SwapCached: 0 kB' 'Active: 6959204 kB' 'Inactive: 3505248 kB' 'Active(anon): 6569664 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537004 kB' 'Mapped: 190872 kB' 'Shmem: 6035832 kB' 'KReclaimable: 174464 kB' 'Slab: 514124 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339660 kB' 'KernelStack: 12368 kB' 'PageTables: 7748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7692764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195680 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.178 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.179 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
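Each get_meminfo call traced here (setup/common.sh@17-33) follows the same pattern: snapshot /proc/meminfo (or the per-node file under /sys/devices/system/node when a node argument is given), strip any "Node N " prefix, then scan key/value pairs with IFS=': ' until the requested key matches, skipping everything else, which is why the log shows one "continue" per meminfo field. A minimal sketch of that lookup flow, reconstructed from the traced commands (assumed, not the verbatim helper):

#!/usr/bin/env bash
shopt -s extglob                                     # needed for the +([0-9]) prefix strip below
# Sketch only: same lookup flow as the common.sh@17-33 trace above.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    # per-node lookups read the node-specific meminfo instead of the global one
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")                 # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue             # skip every other key, as in the trace
        echo "$val"
        return 0
    done
    return 1
}
# e.g. get_meminfo HugePages_Total  -> 1536 on this host, per the meminfo snapshot printed above.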
00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28535768 kB' 'MemAvailable: 32105964 kB' 'Buffers: 2704 kB' 'Cached: 9927928 kB' 'SwapCached: 0 kB' 'Active: 6962828 kB' 'Inactive: 3505248 kB' 'Active(anon): 6573288 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540636 kB' 'Mapped: 190768 kB' 'Shmem: 6035844 kB' 'KReclaimable: 174464 kB' 'Slab: 514196 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339732 kB' 'KernelStack: 12368 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7695584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.180 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.181 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:10.182 nr_hugepages=1536 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.182 resv_hugepages=0 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.182 surplus_hugepages=0 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.182 anon_hugepages=0 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28535428 kB' 'MemAvailable: 32105624 kB' 'Buffers: 2704 kB' 'Cached: 9927956 kB' 'SwapCached: 0 kB' 'Active: 6963860 kB' 'Inactive: 3505248 kB' 'Active(anon): 6574320 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 541632 kB' 'Mapped: 191220 kB' 'Shmem: 6035872 kB' 'KReclaimable: 174464 kB' 'Slab: 514188 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339724 kB' 'KernelStack: 12336 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7697204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195588 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.182 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.183 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.184 
12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21253720 kB' 'MemUsed: 3318636 kB' 'SwapCached: 0 kB' 'Active: 1618840 kB' 'Inactive: 72500 kB' 'Active(anon): 1489572 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383120 kB' 'Mapped: 85144 kB' 'AnonPages: 311348 kB' 'Shmem: 1181352 kB' 'KernelStack: 6824 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51680 kB' 'Slab: 209908 kB' 'SReclaimable: 51680 kB' 'SUnreclaim: 158228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.184 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.185 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7282144 kB' 'MemUsed: 12172172 kB' 'SwapCached: 0 kB' 'Active: 5339724 kB' 'Inactive: 3432748 kB' 'Active(anon): 5079452 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3432748 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8547580 kB' 'Mapped: 105240 kB' 'AnonPages: 225024 kB' 'Shmem: 4854560 kB' 'KernelStack: 5544 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122784 kB' 'Slab: 304280 kB' 'SReclaimable: 122784 kB' 'SUnreclaim: 181496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _
00:03:10.186 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [get_meminfo walks the remaining /proc/meminfo keys on this pass -- SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free -- and hits 'continue' on every key that is not HugePages_Surp]
00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
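The field-by-field scan above is the get_meminfo lookup pattern used throughout these hugepage tests. A minimal stand-alone sketch of that pattern follows; it is illustrative only and not the actual setup/common.sh helper, which first snapshots the file with mapfile -t and strips per-node prefixes:

# Hypothetical sketch of the lookup traced above: split each /proc/meminfo
# line on ': ', skip keys that do not match (the 'continue' entries in the
# trace), and print the value of the requested key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys fall through, as in the trace
        echo "$val"                        # e.g. HugePages_Surp -> 0 on this system
        return 0
    done < /proc/meminfo
    return 1
}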
12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.187 node0=512 expecting 512 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:10.187 node1=1024 expecting 1024 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:10.187 00:03:10.187 real 0m1.519s 00:03:10.187 user 0m0.671s 00:03:10.187 sys 0m0.826s 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.187 12:42:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.187 ************************************ 00:03:10.187 END TEST custom_alloc 00:03:10.187 ************************************ 00:03:10.187 12:42:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:10.187 12:42:28 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:10.187 12:42:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.187 12:42:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.187 12:42:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.187 ************************************ 00:03:10.187 START TEST no_shrink_alloc 00:03:10.187 ************************************ 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.187 12:42:28 
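The get_test_nr_hugepages 2097152 0 call traced above ends with nr_hugepages=1024 and a single entry in node_ids. The exact hugepages.sh expression is not shown in this excerpt, but the numbers in the log are consistent with the following sketch, using only values that appear in the trace (requested size 2097152 and the 2048 kB Hugepagesize reported in the meminfo dumps):

# Illustrative arithmetic only -- variable names are hypothetical.
size=2097152                                 # argument passed to get_test_nr_hugepages
default_hugepages=2048                        # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024, matching the trace
nodes_test[0]=$nr_hugepages                   # no_shrink_alloc pins the whole pool on node 0
echo "node0=${nodes_test[0]}"                 # -> node0=1024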
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.187 12:42:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.563 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:11.563 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:11.563 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:11.563 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:11.563 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:11.563 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:11.563 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:11.563 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:11.563 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:11.563 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:11.563 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:11.563 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:11.563 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:11.563 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:11.563 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:11.563 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:11.563 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29573200 kB' 'MemAvailable: 33143396 kB' 'Buffers: 2704 kB' 'Cached: 9928048 kB' 'SwapCached: 0 kB' 'Active: 6958884 kB' 'Inactive: 3505248 kB' 'Active(anon): 6569344 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536580 kB' 'Mapped: 190544 kB' 'Shmem: 6035964 kB' 'KReclaimable: 174464 kB' 'Slab: 514092 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339628 kB' 'KernelStack: 12416 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7691852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.563 12:42:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31-32 -- # [the AnonHugePages lookup walks the remaining /proc/meminfo keys -- Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk -- and hits 'continue' on every key that is not AnonHugePages]
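Each get_meminfo call in this test, including the ones that follow for HugePages_Surp and HugePages_Rsvd, starts by choosing its input file: with the node argument left empty, the [[ -e /sys/devices/system/node/node/meminfo ]] test seen in the trace fails and the system-wide /proc/meminfo is used. A hypothetical helper capturing that selection (assuming the standard per-node sysfs path; not the actual setup/common.sh code) would be:

# Prefer the per-node meminfo file when a node id is supplied, otherwise
# fall back to /proc/meminfo, mirroring the mem_f assignment in the trace.
pick_meminfo_source() {
    local node=$1
    local mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "$mem_f"
}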
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.564 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.564 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.564 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.564 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.564 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29574180 kB' 'MemAvailable: 33144376 kB' 'Buffers: 2704 kB' 'Cached: 9928048 kB' 'SwapCached: 0 kB' 'Active: 6958956 kB' 'Inactive: 3505248 kB' 'Active(anon): 6569416 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536652 kB' 'Mapped: 190480 kB' 'Shmem: 6035964 kB' 'KReclaimable: 174464 kB' 'Slab: 514060 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339596 kB' 'KernelStack: 12384 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7691868 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:11.565 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [the HugePages_Surp lookup walks the remaining /proc/meminfo keys -- Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total -- and hits 'continue' on every key that is not HugePages_Surp]
00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.566 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29574468 kB' 'MemAvailable: 33144664 kB' 'Buffers: 2704 kB' 'Cached: 9928052 kB' 'SwapCached: 0 kB' 'Active: 6958556 kB' 'Inactive: 3505248 kB' 'Active(anon): 6569016 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536252 kB' 'Mapped: 190404 kB' 'Shmem: 6035968 kB' 'KReclaimable: 174464 kB' 'Slab: 514056 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339592 kB' 'KernelStack: 12400 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7691892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.567 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.568 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.569 nr_hugepages=1024 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.569 resv_hugepages=0 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.569 surplus_hugepages=0 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.569 anon_hugepages=0 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29573964 kB' 'MemAvailable: 33144160 kB' 'Buffers: 2704 kB' 'Cached: 9928052 kB' 'SwapCached: 0 kB' 'Active: 6958880 kB' 'Inactive: 3505248 kB' 'Active(anon): 6569340 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 536624 kB' 'Mapped: 190404 kB' 'Shmem: 6035968 kB' 'KReclaimable: 174464 kB' 'Slab: 514056 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339592 kB' 'KernelStack: 12416 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7691912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195664 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.569 12:42:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.569 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.570 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:11.571 12:42:29 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20203920 kB' 'MemUsed: 4368436 kB' 'SwapCached: 0 kB' 'Active: 1618900 kB' 'Inactive: 72500 kB' 'Active(anon): 1489632 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383124 kB' 'Mapped: 85144 kB' 'AnonPages: 311376 kB' 'Shmem: 1181356 kB' 'KernelStack: 6824 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51680 kB' 'Slab: 209812 kB' 'SReclaimable: 51680 kB' 'SUnreclaim: 158132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.571 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.571 12:42:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.571 12:42:29 (... get_meminfo walks the remaining per-node meminfo keys, Unevictable through HugePages_Total, continuing past every key that is not HugePages_Surp ...) 00:03:11.572 12:42:29
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.572 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:11.573 node0=1024 expecting 1024 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.573 12:42:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.951 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:12.951 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:12.951 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:12.951 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:12.951 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:12.951 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:12.951 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:12.951 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:12.951 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:12.951 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:12.951 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:12.951 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:12.951 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:12.951 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:12.951 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:12.951 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:12.951 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:12.951 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:12.951 
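The INFO line above is scripts/setup.sh rechecking the hugepage pool: the test re-enters setup with CLEAR_HUGE=no and NRHUGE=512, and since node0 already holds 1024 pages of 2048 kB, nothing new needs to be allocated. A minimal standalone sketch of that kind of per-node check against the kernel's sysfs interface follows; the 512-page target and node0 are taken from the log, while the ensure_node_hugepages helper is illustrative and not part of SPDK's setup.sh.

#!/usr/bin/env bash
# Illustrative sketch: make sure at least $2 hugepages (2048 kB) exist on NUMA node $1.
ensure_node_hugepages() {
    local node=$1 want=$2
    local nr=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    local have
    have=$(<"$nr")
    if ((have >= want)); then
        echo "INFO: Requested $want hugepages but $have already allocated on node$node"
        return 0
    fi
    # Writing the new total asks the kernel to allocate the difference (needs root).
    echo "$want" > "$nr"
    have=$(<"$nr")
    ((have >= want)) || echo "WARN: only $have of $want hugepages allocated on node$node" >&2
}

ensure_node_hugepages 0 512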
12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.951 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29574092 kB' 'MemAvailable: 33144288 kB' 'Buffers: 2704 kB' 'Cached: 9928156 kB' 'SwapCached: 0 kB' 'Active: 6960964 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571424 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539056 kB' 'Mapped: 190472 kB' 'Shmem: 6036072 kB' 'KReclaimable: 174464 kB' 'Slab: 513892 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339428 kB' 'KernelStack: 12512 kB' 'PageTables: 8912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7694452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.952 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.952 12:42:31 (... get_meminfo walks each /proc/meminfo key in turn, MemFree through HardwareCorrupted, continuing past every key that is not AnonHugePages ...) 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
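The trace above is common.sh's get_meminfo walking the whole of /proc/meminfo for a single key: it reads the file into an array, strips any leading "Node <n>" prefix, then splits each line on IFS=': ' into a key and a value and echoes the value once the key matches; here AnonHugePages is 0 kB, which hugepages.sh stores as anon=0. A condensed, self-contained sketch of that parsing pattern follows; the function body is a simplification for illustration, not the exact common.sh implementation.

#!/usr/bin/env bash
# Illustrative sketch of the get_meminfo pattern shown in the trace.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo mem line var val _
    # A per-node query reads that node's own meminfo from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines are prefixed with "Node <n> "; strip it so the keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"   # sizes are reported in kB, HugePages_* keys are plain counts
            return 0
        fi
    done
    return 1
}

get_meminfo AnonHugePages       # prints 0 on this build host
get_meminfo HugePages_Total 0   # per-node query; should print 1024 for node0 here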
00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.954 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29583428 kB' 'MemAvailable: 33153624 kB' 'Buffers: 2704 kB' 'Cached: 9928156 kB' 'SwapCached: 0 kB' 'Active: 6959848 kB' 'Inactive: 3505248 kB' 'Active(anon): 6570308 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537792 kB' 'Mapped: 190420 kB' 'Shmem: 6036072 kB' 'KReclaimable: 174464 kB' 'Slab: 513884 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339420 kB' 'KernelStack: 12704 kB' 'PageTables: 8108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7693120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB'
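This second snapshot is the fresh /proc/meminfo dump that get_meminfo re-reads for the HugePages_Surp query, and its hugepage counters are self-consistent: 1024 pages at a Hugepagesize of 2048 kB account for the reported Hugetlb footprint of 2097152 kB (2 GiB) out of 44026672 kB MemTotal, with all 1024 pages still free. A small illustrative cross-check of those numbers against a live /proc/meminfo follows (it assumes a single hugepage size is in use and is not part of the SPDK tree).

#!/usr/bin/env bash
# Illustrative cross-check: HugePages_Total * Hugepagesize should match Hugetlb
# when only one hugepage size is configured.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 on this host
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 (kB)
hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)      # 2097152 (kB)

echo "hugepage footprint: $((total * size_kb)) kB (reported Hugetlb: $hugetlb_kb kB)"
# 1024 * 2048 kB = 2097152 kB = 2 GiB reserved for hugepages on this host.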
(... get_meminfo walks the /proc/meminfo keys from MemTotal through HugePages_Rsvd, continuing past every key that is not HugePages_Surp ...) 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.956 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29584012 kB' 'MemAvailable: 33154208 kB' 'Buffers: 2704 kB' 'Cached: 9928180 kB' 'SwapCached: 0 kB' 'Active: 6960920 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571380 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538384 kB' 'Mapped: 190420 kB' 'Shmem: 6036096 kB' 'KReclaimable: 174464 kB' 'Slab: 513884 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339420 kB' 'KernelStack: 12768 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7694492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.957 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.958 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.223 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.223 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.223 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.224 nr_hugepages=1024 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.224 resv_hugepages=0 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.224 surplus_hugepages=0 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.224 anon_hugepages=0 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29584448 kB' 'MemAvailable: 33154644 kB' 'Buffers: 2704 kB' 'Cached: 9928200 kB' 'SwapCached: 0 kB' 'Active: 6960992 kB' 'Inactive: 3505248 kB' 'Active(anon): 6571452 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505248 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538476 kB' 'Mapped: 190420 kB' 'Shmem: 6036116 kB' 'KReclaimable: 174464 kB' 'Slab: 513884 kB' 'SReclaimable: 174464 kB' 'SUnreclaim: 339420 kB' 'KernelStack: 12512 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7692152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 35904 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1603164 kB' 'DirectMap2M: 16142336 kB' 'DirectMap1G: 34603008 kB' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.224 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.225 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20220928 kB' 'MemUsed: 4351428 kB' 'SwapCached: 0 kB' 'Active: 1618268 kB' 'Inactive: 72500 kB' 'Active(anon): 1489000 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1383124 kB' 'Mapped: 85144 kB' 'AnonPages: 310744 kB' 'Shmem: 1181356 kB' 'KernelStack: 6744 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 51680 kB' 'Slab: 209768 kB' 'SReclaimable: 51680 kB' 'SUnreclaim: 158088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.226 
12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.226 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 
12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:13.227 node0=1024 expecting 1024 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:13.227 00:03:13.227 real 0m2.879s 00:03:13.227 user 0m1.173s 00:03:13.227 sys 0m1.641s 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.227 12:42:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.227 ************************************ 00:03:13.227 END TEST no_shrink_alloc 00:03:13.227 ************************************ 00:03:13.227 12:42:31 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
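[editorial sketch] The loops traced throughout the no_shrink_alloc test above are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) one key at a time with IFS=': ' until the requested field matches, then echoing its value. The following is a minimal reconstruction of that pattern from the trace, not the verbatim SPDK source; the name get_meminfo_sketch and the "${val:-0}" default are illustrative assumptions.

shopt -s extglob

get_meminfo_sketch() {                        # hypothetical name; mirrors the traced pattern
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node meminfo if it exists (as the trace does for node0).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local mem line
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix found in per-node files
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Keep scanning (the "continue" entries above) until the requested key matches, then emit it.
        [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
    done
    return 1
}

Exercised the way the trace does: get_meminfo_sketch HugePages_Rsvd reads the system-wide file, while get_meminfo_sketch HugePages_Surp 0 reads node0's meminfo.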
00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:13.227 12:42:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:13.227 00:03:13.227 real 0m11.748s 00:03:13.227 user 0m4.541s 00:03:13.227 sys 0m6.192s 00:03:13.227 12:42:31 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.227 12:42:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.227 ************************************ 00:03:13.227 END TEST hugepages 00:03:13.227 ************************************ 00:03:13.227 12:42:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:13.227 12:42:31 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:13.227 12:42:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.227 12:42:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.227 12:42:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:13.227 ************************************ 00:03:13.227 START TEST driver 00:03:13.228 ************************************ 00:03:13.228 12:42:31 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:13.228 * Looking for test storage... 
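The clear_hp step traced above closes out the hugepages suite: it walks every NUMA node's hugepage directories and writes a zero into each page size's allocation knob so the next suite starts from a clean slate. xtrace does not show redirections, so the nr_hugepages target below is an assumption, and the node list is globbed here instead of taken from ${!nodes_sys[@]}:

  clear_hp() {
      local node hp
      for node in /sys/devices/system/node/node*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"   # release the reserved pages of this size
          done
      done
      export CLEAR_HUGE=yes                 # ask later setup.sh runs to keep them cleared
  }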
00:03:13.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:13.228 12:42:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:13.228 12:42:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.228 12:42:31 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.760 12:42:33 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:15.760 12:42:33 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.760 12:42:33 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.760 12:42:33 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:15.760 ************************************ 00:03:15.760 START TEST guess_driver 00:03:15.760 ************************************ 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:15.760 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:15.760 Looking for driver=vfio-pci 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.760 12:42:33 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.139 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.398 12:42:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.336 12:42:36 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.876 00:03:20.876 real 0m5.017s 00:03:20.876 user 0m1.149s 00:03:20.876 sys 0m1.951s 00:03:20.876 12:42:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.876 12:42:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:20.876 ************************************ 00:03:20.876 END TEST guess_driver 00:03:20.876 ************************************ 00:03:20.876 12:42:38 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:20.876 00:03:20.876 real 0m7.695s 00:03:20.876 user 0m1.748s 00:03:20.876 sys 0m2.991s 00:03:20.876 12:42:38 
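guess_driver picks the kernel driver that setup.sh should bind NVMe controllers to: it prefers vfio-pci whenever the host has populated IOMMU groups (143 of them here) and modprobe can resolve vfio_pci and its dependencies, then re-runs setup.sh config and checks that every "-> driver" line it prints matches that choice, which is what the long run of "[[ vfio-pci == vfio-pci ]]" tests above is doing. A rough sketch of the decision, with the uio_pci_generic fallback assumed rather than exercised in this run:

  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
          echo vfio-pci          # IOMMU is usable and vfio_pci resolves to real modules
      else
          echo uio_pci_generic   # assumed fallback when vfio cannot be used
      fi
  }

  driver=$(pick_driver)
  echo "Looking for driver=$driver"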
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.876 12:42:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:20.876 ************************************ 00:03:20.876 END TEST driver 00:03:20.876 ************************************ 00:03:20.876 12:42:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:20.876 12:42:39 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:20.876 12:42:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.876 12:42:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.876 12:42:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:20.876 ************************************ 00:03:20.876 START TEST devices 00:03:20.876 ************************************ 00:03:20.876 12:42:39 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:20.876 * Looking for test storage... 00:03:21.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:21.136 12:42:39 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:21.136 12:42:39 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:21.136 12:42:39 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:21.136 12:42:39 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:22.513 
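Before picking a test disk, get_zoned_devs screens /sys/block for zoned namespaces (their queue/zoned attribute reads something other than "none"), since those cannot back an ordinary ext4 mount; the survivors are then size-checked against min_disk_size=3221225472, i.e. 3 GiB. A small sketch of that filter, collecting names into a plain array rather than the bdf-keyed map devices.sh builds:

  zoned=()
  for dev in /sys/block/nvme*; do
      [[ -e $dev/queue/zoned ]] || continue
      [[ $(<"$dev/queue/zoned") != none ]] && zoned+=("${dev##*/}")
  done
  echo "zoned namespaces excluded: ${zoned[*]:-none}"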
12:42:40 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:22.513 No valid GPT data, bailing 00:03:22.513 12:42:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:22.513 12:42:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:22.513 12:42:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:22.513 12:42:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.513 12:42:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:22.513 ************************************ 00:03:22.513 START TEST nvme_mount 00:03:22.513 ************************************ 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:22.513 12:42:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:23.906 Creating new GPT entries in memory. 00:03:23.906 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:23.906 other utilities. 00:03:23.906 12:42:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:23.906 12:42:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:23.906 12:42:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:23.906 12:42:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:23.906 12:42:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:24.882 Creating new GPT entries in memory. 00:03:24.882 The operation has completed successfully. 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3257945 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:24.882 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:24.883 12:42:42 
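nvme_mount then carves a 1 GiB test partition out of /dev/nvme0n1 and puts ext4 on it: the sector range 2048:2099199 is exactly 2097152 sectors, i.e. 1073741824 bytes, the size= value partition_drive was given. The real run wraps sgdisk in flock and sync_dev_uevents.sh so it can wait for the partition uevent; a stripped-down sketch with hypothetical $disk/$mnt placeholders:

  disk=/dev/nvme0n1
  mnt=/tmp/nvme_mount                        # stand-in for test/setup/nvme_mount
  sgdisk "$disk" --zap-all                   # wipe any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:2099199        # one 1 GiB partition
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"                  # quiet, force (no prompt on a fresh partition)
  mount "${disk}p1" "$mnt"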
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.883 12:42:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:25.823 12:42:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:26.081 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:26.081 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:26.082 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:26.340 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:26.340 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:26.340 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:26.340 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.340 12:42:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.731 12:42:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
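Each verify() pass above restricts setup.sh to the test controller with PCI_ALLOWED=0000:82:00.0, runs "setup.sh config", and reads its report line by line ("read -r pci _ _ status"); the device passes only when its status line says it was skipped because of an active mount or holder, e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev". A sketch of that check, assuming the "<bdf> <vendor> <device> <status>" column layout the read implies:

  verify_skipped() {
      local dev=$1 token=$2 pci _ status found=0
      while read -r pci _ _ status; do
          [[ $pci == "$dev" ]] || continue
          [[ $status == *"Active devices: "*"$token"* ]] && found=1
      done < <(PCI_ALLOWED="$dev" ./scripts/setup.sh config)
      (( found == 1 ))
  }

  verify_skipped 0000:82:00.0 "mount@nvme0n1:nvme0n1p1"   # the whole-disk pass checks mount@nvme0n1:nvme0n1 instead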
00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:29.107 12:42:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:29.107 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:29.107 00:03:29.107 real 0m6.375s 00:03:29.107 user 0m1.519s 00:03:29.107 sys 0m2.461s 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.107 12:42:47 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:29.107 ************************************ 00:03:29.107 END TEST nvme_mount 00:03:29.107 ************************************ 00:03:29.107 12:42:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:29.107 12:42:47 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:29.107 12:42:47 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.107 12:42:47 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.107 12:42:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:29.107 ************************************ 00:03:29.107 START TEST dm_mount 00:03:29.107 ************************************ 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:29.107 12:42:47 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:30.042 Creating new GPT entries in memory. 00:03:30.042 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:30.042 other utilities. 00:03:30.042 12:42:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:30.042 12:42:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:30.042 12:42:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:30.042 12:42:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:30.042 12:42:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:30.975 Creating new GPT entries in memory. 00:03:30.975 The operation has completed successfully. 00:03:30.975 12:42:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:30.975 12:42:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:30.975 12:42:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:30.975 12:42:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:30.975 12:42:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:32.355 The operation has completed successfully. 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3260352 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:32.355 12:42:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:32.356 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.356 12:42:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:33.291 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:33.292 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.550 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.550 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:33.550 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:33.551 12:42:51 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.551 12:42:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:03:34.489 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:34.749 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:34.749 00:03:34.749 real 0m5.815s 00:03:34.749 user 0m0.992s 00:03:34.749 sys 0m1.690s 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.749 12:42:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:34.749 ************************************ 00:03:34.749 END TEST dm_mount 00:03:34.749 ************************************ 00:03:35.009 12:42:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:35.009 12:42:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:35.267 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:35.267 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:35.267 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:35.267 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:35.267 12:42:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:35.267 00:03:35.267 real 0m14.212s 00:03:35.267 user 0m3.230s 00:03:35.267 sys 0m5.224s 00:03:35.267 12:42:53 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.267 12:42:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:35.267 ************************************ 00:03:35.267 END TEST devices 00:03:35.267 ************************************ 00:03:35.267 12:42:53 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:35.267 00:03:35.267 real 0m45.018s 00:03:35.267 user 0m13.044s 00:03:35.267 sys 0m20.243s 00:03:35.267 12:42:53 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.267 12:42:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.267 ************************************ 00:03:35.267 END TEST setup.sh 00:03:35.267 ************************************ 00:03:35.267 12:42:53 -- common/autotest_common.sh@1142 -- # return 0 00:03:35.267 12:42:53 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:36.641 Hugepages 00:03:36.641 node hugesize free / total 00:03:36.641 node0 1048576kB 0 / 0 00:03:36.641 node0 2048kB 2048 / 2048 00:03:36.641 node1 1048576kB 0 / 0 00:03:36.641 node1 2048kB 0 / 0 00:03:36.641 00:03:36.641 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:36.641 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:36.641 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:36.641 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:36.641 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:36.641 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:36.641 12:42:54 -- spdk/autotest.sh@130 -- # uname -s 00:03:36.641 12:42:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:36.641 12:42:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:36.641 12:42:54 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.018 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:38.018 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:38.018 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:38.959 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.959 12:42:57 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:39.896 12:42:58 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:39.896 12:42:58 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:39.896 12:42:58 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:39.896 12:42:58 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:39.896 12:42:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:39.896 12:42:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:39.896 12:42:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:39.896 12:42:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:39.896 12:42:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:39.896 12:42:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:39.896 12:42:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:03:39.896 12:42:58 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:41.276 Waiting for block devices as requested 00:03:41.276 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:03:41.276 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:41.276 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:41.534 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:41.534 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:41.534 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:41.534 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:41.793 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:41.793 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:41.793 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:42.054 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:42.054 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:42.054 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:42.054 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:42.313 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:42.313 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:42.313 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:42.571 12:43:00 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:42.571 12:43:00 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:03:42.571 12:43:00 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:03:42.571 12:43:00 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:42.571 12:43:00 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:42.571 12:43:00 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:42.571 12:43:00 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:42.571 12:43:00 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:42.571 12:43:00 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:42.571 12:43:00 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:42.571 12:43:00 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:42.572 12:43:00 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:42.572 12:43:00 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:42.572 12:43:00 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:42.572 12:43:00 -- common/autotest_common.sh@1557 -- # continue 00:03:42.572 12:43:00 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:42.572 12:43:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:42.572 12:43:00 -- common/autotest_common.sh@10 -- # set +x 00:03:42.572 12:43:00 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:42.572 12:43:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:42.572 12:43:00 -- common/autotest_common.sh@10 -- # set +x 00:03:42.572 12:43:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.949 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:43.949 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:03:43.949 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:43.949 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:44.888 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.888 12:43:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:44.888 12:43:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:44.888 12:43:02 -- common/autotest_common.sh@10 -- # set +x 00:03:44.888 12:43:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:44.888 12:43:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:44.888 12:43:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:44.888 12:43:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:44.888 12:43:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:44.888 12:43:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:44.888 12:43:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:44.888 12:43:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:44.888 12:43:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:44.888 12:43:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:44.888 12:43:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:44.888 12:43:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:44.888 12:43:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:03:44.888 12:43:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:44.888 12:43:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:03:44.888 12:43:03 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:44.888 12:43:03 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:44.888 12:43:03 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:44.888 12:43:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:03:44.888 12:43:03 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:03:44.888 12:43:03 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3265680 00:03:44.888 12:43:03 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:44.888 12:43:03 -- common/autotest_common.sh@1598 -- # waitforlisten 3265680 00:03:44.888 12:43:03 -- common/autotest_common.sh@829 -- # '[' -z 3265680 ']' 00:03:44.888 12:43:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:44.888 12:43:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:44.888 12:43:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:44.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:44.888 12:43:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:44.888 12:43:03 -- common/autotest_common.sh@10 -- # set +x 00:03:45.152 [2024-07-15 12:43:03.126332] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:03:45.152 [2024-07-15 12:43:03.126419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265680 ] 00:03:45.152 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.152 [2024-07-15 12:43:03.184371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.152 [2024-07-15 12:43:03.291090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:45.411 12:43:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:45.411 12:43:03 -- common/autotest_common.sh@862 -- # return 0 00:03:45.411 12:43:03 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:45.411 12:43:03 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:45.411 12:43:03 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:03:48.704 nvme0n1 00:03:48.704 12:43:06 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:48.704 [2024-07-15 12:43:06.846313] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:48.704 [2024-07-15 12:43:06.846358] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:48.704 request: 00:03:48.704 { 00:03:48.704 "nvme_ctrlr_name": "nvme0", 00:03:48.704 "password": "test", 00:03:48.704 "method": "bdev_nvme_opal_revert", 00:03:48.704 "req_id": 1 00:03:48.704 } 00:03:48.704 Got JSON-RPC error response 00:03:48.704 response: 00:03:48.704 { 00:03:48.704 "code": -32603, 00:03:48.704 "message": "Internal error" 00:03:48.704 } 00:03:48.704 12:43:06 -- common/autotest_common.sh@1604 -- # true 00:03:48.705 12:43:06 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:48.705 12:43:06 -- common/autotest_common.sh@1608 -- # killprocess 3265680 00:03:48.705 12:43:06 -- common/autotest_common.sh@948 -- # '[' -z 3265680 ']' 00:03:48.705 12:43:06 -- common/autotest_common.sh@952 -- # kill -0 3265680 00:03:48.705 12:43:06 -- common/autotest_common.sh@953 -- # uname 00:03:48.705 12:43:06 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:48.705 12:43:06 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3265680 00:03:48.705 12:43:06 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:48.705 12:43:06 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:48.705 12:43:06 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3265680' 00:03:48.705 killing process with pid 3265680 00:03:48.705 12:43:06 -- common/autotest_common.sh@967 -- # kill 3265680 00:03:48.705 12:43:06 -- common/autotest_common.sh@972 -- # wait 3265680 00:03:50.602 12:43:08 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:50.602 12:43:08 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:50.602 12:43:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:50.602 12:43:08 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:50.602 12:43:08 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:50.602 12:43:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.602 12:43:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.602 12:43:08 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:50.602 12:43:08 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:50.602 12:43:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.602 12:43:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.602 12:43:08 -- common/autotest_common.sh@10 -- # set +x 00:03:50.602 ************************************ 00:03:50.602 START TEST env 00:03:50.602 ************************************ 00:03:50.602 12:43:08 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:50.602 * Looking for test storage... 00:03:50.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:50.602 12:43:08 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:50.602 12:43:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.602 12:43:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.602 12:43:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.602 ************************************ 00:03:50.602 START TEST env_memory 00:03:50.602 ************************************ 00:03:50.602 12:43:08 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:50.602 00:03:50.602 00:03:50.602 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.602 http://cunit.sourceforge.net/ 00:03:50.602 00:03:50.602 00:03:50.602 Suite: memory 00:03:50.602 Test: alloc and free memory map ...[2024-07-15 12:43:08.797853] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:50.602 passed 00:03:50.861 Test: mem map translation ...[2024-07-15 12:43:08.817421] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:50.861 [2024-07-15 12:43:08.817443] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:50.861 [2024-07-15 12:43:08.817501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:50.862 [2024-07-15 12:43:08.817512] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:50.862 passed 00:03:50.862 Test: mem map registration ...[2024-07-15 12:43:08.857973] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:50.862 [2024-07-15 12:43:08.857992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:50.862 passed 00:03:50.862 Test: mem map adjacent registrations ...passed 00:03:50.862 00:03:50.862 Run Summary: Type Total Ran Passed Failed Inactive 00:03:50.862 suites 1 1 n/a 0 0 00:03:50.862 tests 4 4 4 0 0 00:03:50.862 asserts 152 152 152 0 n/a 00:03:50.862 00:03:50.862 Elapsed time = 0.139 seconds 00:03:50.862 00:03:50.862 real 0m0.146s 00:03:50.862 user 0m0.138s 00:03:50.862 sys 0m0.007s 00:03:50.862 12:43:08 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:50.862 12:43:08 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:50.862 ************************************ 00:03:50.862 END TEST env_memory 00:03:50.862 ************************************ 00:03:50.862 12:43:08 env -- common/autotest_common.sh@1142 -- # return 0 00:03:50.862 12:43:08 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:50.862 12:43:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.862 12:43:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.862 12:43:08 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.862 ************************************ 00:03:50.862 START TEST env_vtophys 00:03:50.862 ************************************ 00:03:50.862 12:43:08 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:50.862 EAL: lib.eal log level changed from notice to debug 00:03:50.862 EAL: Detected lcore 0 as core 0 on socket 0 00:03:50.862 EAL: Detected lcore 1 as core 1 on socket 0 00:03:50.862 EAL: Detected lcore 2 as core 2 on socket 0 00:03:50.862 EAL: Detected lcore 3 as core 3 on socket 0 00:03:50.862 EAL: Detected lcore 4 as core 4 on socket 0 00:03:50.862 EAL: Detected lcore 5 as core 5 on socket 0 00:03:50.862 EAL: Detected lcore 6 as core 8 on socket 0 00:03:50.862 EAL: Detected lcore 7 as core 9 on socket 0 00:03:50.862 EAL: Detected lcore 8 as core 10 on socket 0 00:03:50.862 EAL: Detected lcore 9 as core 11 on socket 0 00:03:50.862 EAL: Detected lcore 10 as core 12 on socket 0 00:03:50.862 EAL: Detected lcore 11 as core 13 on socket 0 00:03:50.862 EAL: Detected lcore 12 as core 0 on socket 1 00:03:50.862 EAL: Detected lcore 13 as core 1 on socket 1 00:03:50.862 EAL: Detected lcore 14 as core 2 on socket 1 00:03:50.862 EAL: Detected lcore 15 as core 3 on socket 1 00:03:50.862 EAL: Detected lcore 16 as core 4 on socket 1 00:03:50.862 EAL: Detected lcore 17 as core 5 on socket 1 00:03:50.862 EAL: Detected lcore 18 as core 8 on socket 1 00:03:50.862 EAL: Detected lcore 19 as core 9 on socket 1 00:03:50.862 EAL: Detected lcore 20 as core 10 on socket 1 00:03:50.862 EAL: Detected lcore 21 as core 11 on socket 1 00:03:50.862 EAL: Detected lcore 22 as core 12 on socket 1 00:03:50.862 EAL: Detected lcore 23 as core 13 on socket 1 00:03:50.862 EAL: Detected lcore 24 as core 0 on socket 0 00:03:50.862 EAL: Detected lcore 25 as core 1 on socket 0 00:03:50.862 EAL: Detected lcore 26 as core 2 on socket 0 00:03:50.862 EAL: Detected lcore 27 as core 3 on socket 0 00:03:50.862 EAL: Detected lcore 28 as core 4 on socket 0 00:03:50.862 EAL: Detected lcore 29 as core 5 on socket 0 00:03:50.862 EAL: Detected lcore 30 as core 8 on socket 0 00:03:50.862 EAL: Detected lcore 31 as core 9 on socket 0 00:03:50.862 EAL: Detected lcore 32 as core 10 on socket 0 00:03:50.862 EAL: Detected lcore 33 as core 11 on socket 0 00:03:50.862 EAL: Detected lcore 34 as core 12 on socket 0 00:03:50.862 EAL: Detected lcore 35 as core 13 on socket 0 00:03:50.862 EAL: Detected lcore 36 as core 0 on socket 1 00:03:50.862 EAL: Detected lcore 37 as core 1 on socket 1 00:03:50.862 EAL: Detected lcore 38 as core 2 on socket 1 00:03:50.862 EAL: Detected lcore 39 as core 3 on socket 1 00:03:50.862 EAL: Detected lcore 40 as core 4 on socket 1 00:03:50.862 EAL: Detected lcore 41 as core 5 on socket 1 00:03:50.862 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:50.862 EAL: Detected lcore 43 as core 9 on socket 1 00:03:50.862 EAL: Detected lcore 44 as core 10 on socket 1 00:03:50.862 EAL: Detected lcore 45 as core 11 on socket 1 00:03:50.862 EAL: Detected lcore 46 as core 12 on socket 1 00:03:50.862 EAL: Detected lcore 47 as core 13 on socket 1 00:03:50.862 EAL: Maximum logical cores by configuration: 128 00:03:50.862 EAL: Detected CPU lcores: 48 00:03:50.862 EAL: Detected NUMA nodes: 2 00:03:50.862 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:50.862 EAL: Detected shared linkage of DPDK 00:03:50.862 EAL: No shared files mode enabled, IPC will be disabled 00:03:50.862 EAL: Bus pci wants IOVA as 'DC' 00:03:50.862 EAL: Buses did not request a specific IOVA mode. 00:03:50.862 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:50.862 EAL: Selected IOVA mode 'VA' 00:03:50.862 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.862 EAL: Probing VFIO support... 00:03:50.862 EAL: IOMMU type 1 (Type 1) is supported 00:03:50.862 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:50.862 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:50.862 EAL: VFIO support initialized 00:03:50.862 EAL: Ask a virtual area of 0x2e000 bytes 00:03:50.862 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:50.862 EAL: Setting up physically contiguous memory... 00:03:50.862 EAL: Setting maximum number of open files to 524288 00:03:50.862 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:50.862 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:50.862 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:50.862 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:50.862 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:50.862 EAL: Ask a virtual area of 0x61000 bytes 00:03:50.862 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:50.862 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:50.862 EAL: Ask a virtual area of 0x400000000 bytes 00:03:50.862 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:50.862 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:50.862 EAL: Hugepages will be freed exactly as allocated. 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.862 EAL: TSC frequency is ~2700000 KHz 00:03:50.862 EAL: Main lcore 0 is ready (tid=7f8a26959a00;cpuset=[0]) 00:03:50.862 EAL: Trying to obtain current memory policy. 00:03:50.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.862 EAL: Restoring previous memory policy: 0 00:03:50.862 EAL: request: mp_malloc_sync 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.862 EAL: Heap on socket 0 was expanded by 2MB 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.862 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:50.862 EAL: Mem event callback 'spdk:(nil)' registered 00:03:50.862 00:03:50.862 00:03:50.862 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.862 http://cunit.sourceforge.net/ 00:03:50.862 00:03:50.862 00:03:50.862 Suite: components_suite 00:03:50.862 Test: vtophys_malloc_test ...passed 00:03:50.862 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:50.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.862 EAL: Restoring previous memory policy: 4 00:03:50.862 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.862 EAL: request: mp_malloc_sync 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.862 EAL: Heap on socket 0 was expanded by 4MB 00:03:50.862 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.862 EAL: request: mp_malloc_sync 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.862 EAL: Heap on socket 0 was shrunk by 4MB 00:03:50.862 EAL: Trying to obtain current memory policy. 
00:03:50.862 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.862 EAL: Restoring previous memory policy: 4 00:03:50.862 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.862 EAL: request: mp_malloc_sync 00:03:50.862 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was expanded by 6MB 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was shrunk by 6MB 00:03:50.863 EAL: Trying to obtain current memory policy. 00:03:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.863 EAL: Restoring previous memory policy: 4 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was expanded by 10MB 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was shrunk by 10MB 00:03:50.863 EAL: Trying to obtain current memory policy. 00:03:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.863 EAL: Restoring previous memory policy: 4 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was expanded by 18MB 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was shrunk by 18MB 00:03:50.863 EAL: Trying to obtain current memory policy. 00:03:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.863 EAL: Restoring previous memory policy: 4 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was expanded by 34MB 00:03:50.863 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.863 EAL: request: mp_malloc_sync 00:03:50.863 EAL: No shared files mode enabled, IPC is disabled 00:03:50.863 EAL: Heap on socket 0 was shrunk by 34MB 00:03:50.863 EAL: Trying to obtain current memory policy. 00:03:50.863 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.121 EAL: Restoring previous memory policy: 4 00:03:51.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.121 EAL: request: mp_malloc_sync 00:03:51.121 EAL: No shared files mode enabled, IPC is disabled 00:03:51.121 EAL: Heap on socket 0 was expanded by 66MB 00:03:51.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.121 EAL: request: mp_malloc_sync 00:03:51.121 EAL: No shared files mode enabled, IPC is disabled 00:03:51.121 EAL: Heap on socket 0 was shrunk by 66MB 00:03:51.121 EAL: Trying to obtain current memory policy. 
00:03:51.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.121 EAL: Restoring previous memory policy: 4 00:03:51.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.121 EAL: request: mp_malloc_sync 00:03:51.121 EAL: No shared files mode enabled, IPC is disabled 00:03:51.121 EAL: Heap on socket 0 was expanded by 130MB 00:03:51.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.121 EAL: request: mp_malloc_sync 00:03:51.121 EAL: No shared files mode enabled, IPC is disabled 00:03:51.121 EAL: Heap on socket 0 was shrunk by 130MB 00:03:51.121 EAL: Trying to obtain current memory policy. 00:03:51.121 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.121 EAL: Restoring previous memory policy: 4 00:03:51.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.121 EAL: request: mp_malloc_sync 00:03:51.121 EAL: No shared files mode enabled, IPC is disabled 00:03:51.121 EAL: Heap on socket 0 was expanded by 258MB 00:03:51.121 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.378 EAL: request: mp_malloc_sync 00:03:51.378 EAL: No shared files mode enabled, IPC is disabled 00:03:51.378 EAL: Heap on socket 0 was shrunk by 258MB 00:03:51.378 EAL: Trying to obtain current memory policy. 00:03:51.378 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.379 EAL: Restoring previous memory policy: 4 00:03:51.379 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.379 EAL: request: mp_malloc_sync 00:03:51.379 EAL: No shared files mode enabled, IPC is disabled 00:03:51.379 EAL: Heap on socket 0 was expanded by 514MB 00:03:51.636 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.636 EAL: request: mp_malloc_sync 00:03:51.636 EAL: No shared files mode enabled, IPC is disabled 00:03:51.636 EAL: Heap on socket 0 was shrunk by 514MB 00:03:51.636 EAL: Trying to obtain current memory policy. 
00:03:51.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.925 EAL: Restoring previous memory policy: 4 00:03:51.925 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.925 EAL: request: mp_malloc_sync 00:03:51.925 EAL: No shared files mode enabled, IPC is disabled 00:03:51.925 EAL: Heap on socket 0 was expanded by 1026MB 00:03:52.196 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.196 EAL: request: mp_malloc_sync 00:03:52.196 EAL: No shared files mode enabled, IPC is disabled 00:03:52.196 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:52.196 passed 00:03:52.196 00:03:52.196 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.196 suites 1 1 n/a 0 0 00:03:52.196 tests 2 2 2 0 0 00:03:52.196 asserts 497 497 497 0 n/a 00:03:52.196 00:03:52.196 Elapsed time = 1.334 seconds 00:03:52.196 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.196 EAL: request: mp_malloc_sync 00:03:52.196 EAL: No shared files mode enabled, IPC is disabled 00:03:52.196 EAL: Heap on socket 0 was shrunk by 2MB 00:03:52.196 EAL: No shared files mode enabled, IPC is disabled 00:03:52.196 EAL: No shared files mode enabled, IPC is disabled 00:03:52.196 EAL: No shared files mode enabled, IPC is disabled 00:03:52.454 00:03:52.454 real 0m1.450s 00:03:52.454 user 0m0.836s 00:03:52.454 sys 0m0.580s 00:03:52.454 12:43:10 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.454 12:43:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:52.454 ************************************ 00:03:52.454 END TEST env_vtophys 00:03:52.454 ************************************ 00:03:52.454 12:43:10 env -- common/autotest_common.sh@1142 -- # return 0 00:03:52.454 12:43:10 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.454 12:43:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.454 12:43:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.454 12:43:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.454 ************************************ 00:03:52.454 START TEST env_pci 00:03:52.454 ************************************ 00:03:52.454 12:43:10 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:52.454 00:03:52.454 00:03:52.454 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.454 http://cunit.sourceforge.net/ 00:03:52.454 00:03:52.454 00:03:52.454 Suite: pci 00:03:52.454 Test: pci_hook ...[2024-07-15 12:43:10.469915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3266572 has claimed it 00:03:52.454 EAL: Cannot find device (10000:00:01.0) 00:03:52.454 EAL: Failed to attach device on primary process 00:03:52.454 passed 00:03:52.454 00:03:52.454 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.454 suites 1 1 n/a 0 0 00:03:52.454 tests 1 1 1 0 0 00:03:52.454 asserts 25 25 25 0 n/a 00:03:52.454 00:03:52.454 Elapsed time = 0.022 seconds 00:03:52.454 00:03:52.454 real 0m0.034s 00:03:52.454 user 0m0.011s 00:03:52.454 sys 0m0.023s 00:03:52.454 12:43:10 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.454 12:43:10 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:52.454 ************************************ 00:03:52.454 END TEST env_pci 00:03:52.454 ************************************ 
00:03:52.454 12:43:10 env -- common/autotest_common.sh@1142 -- # return 0 00:03:52.454 12:43:10 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:52.454 12:43:10 env -- env/env.sh@15 -- # uname 00:03:52.454 12:43:10 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:52.454 12:43:10 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:52.454 12:43:10 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.454 12:43:10 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:52.454 12:43:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.454 12:43:10 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.454 ************************************ 00:03:52.454 START TEST env_dpdk_post_init 00:03:52.454 ************************************ 00:03:52.454 12:43:10 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:52.454 EAL: Detected CPU lcores: 48 00:03:52.454 EAL: Detected NUMA nodes: 2 00:03:52.454 EAL: Detected shared linkage of DPDK 00:03:52.454 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:52.454 EAL: Selected IOVA mode 'VA' 00:03:52.454 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.454 EAL: VFIO support initialized 00:03:52.454 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:52.454 EAL: Using IOMMU type 1 (Type 1) 00:03:52.454 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:52.712 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:53.645 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:03:56.917 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:03:56.917 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:03:56.917 Starting DPDK initialization... 00:03:56.917 Starting SPDK post initialization... 00:03:56.917 SPDK NVMe probe 00:03:56.917 Attaching to 0000:82:00.0 00:03:56.917 Attached to 0000:82:00.0 00:03:56.917 Cleaning up... 
00:03:56.917 00:03:56.917 real 0m4.423s 00:03:56.917 user 0m3.309s 00:03:56.917 sys 0m0.173s 00:03:56.917 12:43:14 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.917 12:43:14 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.917 ************************************ 00:03:56.917 END TEST env_dpdk_post_init 00:03:56.917 ************************************ 00:03:56.917 12:43:14 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.917 12:43:14 env -- env/env.sh@26 -- # uname 00:03:56.917 12:43:14 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:56.917 12:43:14 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.917 12:43:14 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.917 12:43:14 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.917 12:43:14 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.917 ************************************ 00:03:56.917 START TEST env_mem_callbacks 00:03:56.917 ************************************ 00:03:56.917 12:43:15 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:56.917 EAL: Detected CPU lcores: 48 00:03:56.917 EAL: Detected NUMA nodes: 2 00:03:56.917 EAL: Detected shared linkage of DPDK 00:03:56.917 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:56.917 EAL: Selected IOVA mode 'VA' 00:03:56.917 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.917 EAL: VFIO support initialized 00:03:56.917 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:56.917 00:03:56.917 00:03:56.917 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.917 http://cunit.sourceforge.net/ 00:03:56.917 00:03:56.917 00:03:56.917 Suite: memory 00:03:56.917 Test: test ... 
00:03:56.917 register 0x200000200000 2097152 00:03:56.917 malloc 3145728 00:03:56.917 register 0x200000400000 4194304 00:03:56.917 buf 0x200000500000 len 3145728 PASSED 00:03:56.917 malloc 64 00:03:56.917 buf 0x2000004fff40 len 64 PASSED 00:03:56.917 malloc 4194304 00:03:56.917 register 0x200000800000 6291456 00:03:56.917 buf 0x200000a00000 len 4194304 PASSED 00:03:56.917 free 0x200000500000 3145728 00:03:56.917 free 0x2000004fff40 64 00:03:56.917 unregister 0x200000400000 4194304 PASSED 00:03:56.917 free 0x200000a00000 4194304 00:03:56.917 unregister 0x200000800000 6291456 PASSED 00:03:56.917 malloc 8388608 00:03:56.917 register 0x200000400000 10485760 00:03:56.917 buf 0x200000600000 len 8388608 PASSED 00:03:56.917 free 0x200000600000 8388608 00:03:56.917 unregister 0x200000400000 10485760 PASSED 00:03:56.917 passed 00:03:56.917 00:03:56.917 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.917 suites 1 1 n/a 0 0 00:03:56.917 tests 1 1 1 0 0 00:03:56.917 asserts 15 15 15 0 n/a 00:03:56.917 00:03:56.917 Elapsed time = 0.005 seconds 00:03:56.917 00:03:56.917 real 0m0.049s 00:03:56.917 user 0m0.016s 00:03:56.917 sys 0m0.033s 00:03:56.917 12:43:15 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.917 12:43:15 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:56.917 ************************************ 00:03:56.917 END TEST env_mem_callbacks 00:03:56.917 ************************************ 00:03:56.917 12:43:15 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.917 00:03:56.917 real 0m6.392s 00:03:56.917 user 0m4.430s 00:03:56.917 sys 0m1.004s 00:03:56.917 12:43:15 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.917 12:43:15 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.917 ************************************ 00:03:56.917 END TEST env 00:03:56.917 ************************************ 00:03:56.917 12:43:15 -- common/autotest_common.sh@1142 -- # return 0 00:03:56.917 12:43:15 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:56.917 12:43:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.917 12:43:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.917 12:43:15 -- common/autotest_common.sh@10 -- # set +x 00:03:57.175 ************************************ 00:03:57.175 START TEST rpc 00:03:57.175 ************************************ 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:57.175 * Looking for test storage... 00:03:57.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.175 12:43:15 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3267226 00:03:57.175 12:43:15 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:57.175 12:43:15 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.175 12:43:15 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3267226 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@829 -- # '[' -z 3267226 ']' 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:57.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:57.175 12:43:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.175 [2024-07-15 12:43:15.249169] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:03:57.175 [2024-07-15 12:43:15.249268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267226 ] 00:03:57.175 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.175 [2024-07-15 12:43:15.312400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.434 [2024-07-15 12:43:15.421829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:57.434 [2024-07-15 12:43:15.421888] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3267226' to capture a snapshot of events at runtime. 00:03:57.434 [2024-07-15 12:43:15.421912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:57.434 [2024-07-15 12:43:15.421923] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:57.434 [2024-07-15 12:43:15.421933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3267226 for offline analysis/debug. 00:03:57.434 [2024-07-15 12:43:15.421970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.692 12:43:15 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:57.692 12:43:15 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:57.692 12:43:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.692 12:43:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.692 12:43:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:57.692 12:43:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:57.692 12:43:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.692 12:43:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.692 12:43:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 ************************************ 00:03:57.692 START TEST rpc_integrity 00:03:57.692 ************************************ 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.692 { 00:03:57.692 "name": "Malloc0", 00:03:57.692 "aliases": [ 00:03:57.692 "b708cb28-9fe8-49e8-adc2-2fac36ed1583" 00:03:57.692 ], 00:03:57.692 "product_name": "Malloc disk", 00:03:57.692 "block_size": 512, 00:03:57.692 "num_blocks": 16384, 00:03:57.692 "uuid": "b708cb28-9fe8-49e8-adc2-2fac36ed1583", 00:03:57.692 "assigned_rate_limits": { 00:03:57.692 "rw_ios_per_sec": 0, 00:03:57.692 "rw_mbytes_per_sec": 0, 00:03:57.692 "r_mbytes_per_sec": 0, 00:03:57.692 "w_mbytes_per_sec": 0 00:03:57.692 }, 00:03:57.692 "claimed": false, 00:03:57.692 "zoned": false, 00:03:57.692 "supported_io_types": { 00:03:57.692 "read": true, 00:03:57.692 "write": true, 00:03:57.692 "unmap": true, 00:03:57.692 "flush": true, 00:03:57.692 "reset": true, 00:03:57.692 "nvme_admin": false, 00:03:57.692 "nvme_io": false, 00:03:57.692 "nvme_io_md": false, 00:03:57.692 "write_zeroes": true, 00:03:57.692 "zcopy": true, 00:03:57.692 "get_zone_info": false, 00:03:57.692 "zone_management": false, 00:03:57.692 "zone_append": false, 00:03:57.692 "compare": false, 00:03:57.692 "compare_and_write": false, 00:03:57.692 "abort": true, 00:03:57.692 "seek_hole": false, 00:03:57.692 "seek_data": false, 00:03:57.692 "copy": true, 00:03:57.692 "nvme_iov_md": false 00:03:57.692 }, 00:03:57.692 "memory_domains": [ 00:03:57.692 { 00:03:57.692 "dma_device_id": "system", 00:03:57.692 "dma_device_type": 1 00:03:57.692 }, 00:03:57.692 { 00:03:57.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.692 "dma_device_type": 2 00:03:57.692 } 00:03:57.692 ], 00:03:57.692 "driver_specific": {} 00:03:57.692 } 00:03:57.692 ]' 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 [2024-07-15 12:43:15.797129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.692 [2024-07-15 12:43:15.797168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.692 [2024-07-15 12:43:15.797189] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d03e0 00:03:57.692 [2024-07-15 12:43:15.797202] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.692 
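The rpc_integrity steps traced here reduce to a handful of JSON-RPC calls against the running spdk_tgt. A minimal sketch of the same create/verify/teardown cycle, assuming SPDK's scripts/rpc.py is used against the default /var/tmp/spdk.sock:

  scripts/rpc.py bdev_malloc_create 8 512                        # 8 MiB malloc bdev, 512-byte blocks; prints its name (Malloc0)
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0    # layer a passthru bdev on top; Malloc0 becomes claimed
  scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2: Malloc0 plus Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0                  # tear down in reverse order
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                      # expect 0 again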
[2024-07-15 12:43:15.798429] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.692 [2024-07-15 12:43:15.798452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.692 Passthru0 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.692 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.692 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.692 { 00:03:57.692 "name": "Malloc0", 00:03:57.692 "aliases": [ 00:03:57.692 "b708cb28-9fe8-49e8-adc2-2fac36ed1583" 00:03:57.692 ], 00:03:57.692 "product_name": "Malloc disk", 00:03:57.692 "block_size": 512, 00:03:57.692 "num_blocks": 16384, 00:03:57.692 "uuid": "b708cb28-9fe8-49e8-adc2-2fac36ed1583", 00:03:57.692 "assigned_rate_limits": { 00:03:57.692 "rw_ios_per_sec": 0, 00:03:57.692 "rw_mbytes_per_sec": 0, 00:03:57.692 "r_mbytes_per_sec": 0, 00:03:57.692 "w_mbytes_per_sec": 0 00:03:57.693 }, 00:03:57.693 "claimed": true, 00:03:57.693 "claim_type": "exclusive_write", 00:03:57.693 "zoned": false, 00:03:57.693 "supported_io_types": { 00:03:57.693 "read": true, 00:03:57.693 "write": true, 00:03:57.693 "unmap": true, 00:03:57.693 "flush": true, 00:03:57.693 "reset": true, 00:03:57.693 "nvme_admin": false, 00:03:57.693 "nvme_io": false, 00:03:57.693 "nvme_io_md": false, 00:03:57.693 "write_zeroes": true, 00:03:57.693 "zcopy": true, 00:03:57.693 "get_zone_info": false, 00:03:57.693 "zone_management": false, 00:03:57.693 "zone_append": false, 00:03:57.693 "compare": false, 00:03:57.693 "compare_and_write": false, 00:03:57.693 "abort": true, 00:03:57.693 "seek_hole": false, 00:03:57.693 "seek_data": false, 00:03:57.693 "copy": true, 00:03:57.693 "nvme_iov_md": false 00:03:57.693 }, 00:03:57.693 "memory_domains": [ 00:03:57.693 { 00:03:57.693 "dma_device_id": "system", 00:03:57.693 "dma_device_type": 1 00:03:57.693 }, 00:03:57.693 { 00:03:57.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.693 "dma_device_type": 2 00:03:57.693 } 00:03:57.693 ], 00:03:57.693 "driver_specific": {} 00:03:57.693 }, 00:03:57.693 { 00:03:57.693 "name": "Passthru0", 00:03:57.693 "aliases": [ 00:03:57.693 "1e9bba9f-8922-599f-8639-d8314032fd3f" 00:03:57.693 ], 00:03:57.693 "product_name": "passthru", 00:03:57.693 "block_size": 512, 00:03:57.693 "num_blocks": 16384, 00:03:57.693 "uuid": "1e9bba9f-8922-599f-8639-d8314032fd3f", 00:03:57.693 "assigned_rate_limits": { 00:03:57.693 "rw_ios_per_sec": 0, 00:03:57.693 "rw_mbytes_per_sec": 0, 00:03:57.693 "r_mbytes_per_sec": 0, 00:03:57.693 "w_mbytes_per_sec": 0 00:03:57.693 }, 00:03:57.693 "claimed": false, 00:03:57.693 "zoned": false, 00:03:57.693 "supported_io_types": { 00:03:57.693 "read": true, 00:03:57.693 "write": true, 00:03:57.693 "unmap": true, 00:03:57.693 "flush": true, 00:03:57.693 "reset": true, 00:03:57.693 "nvme_admin": false, 00:03:57.693 "nvme_io": false, 00:03:57.693 "nvme_io_md": false, 00:03:57.693 "write_zeroes": true, 00:03:57.693 "zcopy": true, 00:03:57.693 "get_zone_info": false, 00:03:57.693 "zone_management": false, 00:03:57.693 "zone_append": false, 00:03:57.693 "compare": false, 00:03:57.693 "compare_and_write": false, 00:03:57.693 "abort": true, 00:03:57.693 "seek_hole": false, 
00:03:57.693 "seek_data": false, 00:03:57.693 "copy": true, 00:03:57.693 "nvme_iov_md": false 00:03:57.693 }, 00:03:57.693 "memory_domains": [ 00:03:57.693 { 00:03:57.693 "dma_device_id": "system", 00:03:57.693 "dma_device_type": 1 00:03:57.693 }, 00:03:57.693 { 00:03:57.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.693 "dma_device_type": 2 00:03:57.693 } 00:03:57.693 ], 00:03:57.693 "driver_specific": { 00:03:57.693 "passthru": { 00:03:57.693 "name": "Passthru0", 00:03:57.693 "base_bdev_name": "Malloc0" 00:03:57.693 } 00:03:57.693 } 00:03:57.693 } 00:03:57.693 ]' 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.693 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.693 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.950 12:43:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.950 00:03:57.950 real 0m0.211s 00:03:57.950 user 0m0.138s 00:03:57.950 sys 0m0.017s 00:03:57.950 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.950 12:43:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.950 ************************************ 00:03:57.950 END TEST rpc_integrity 00:03:57.950 ************************************ 00:03:57.950 12:43:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.950 12:43:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.950 12:43:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.950 12:43:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.950 12:43:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 ************************************ 00:03:57.951 START TEST rpc_plugins 00:03:57.951 ************************************ 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:57.951 12:43:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.951 12:43:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.951 12:43:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 12:43:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.951 12:43:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.951 { 00:03:57.951 "name": "Malloc1", 00:03:57.951 "aliases": [ 00:03:57.951 "733bd139-567e-4076-a61d-c8c2e4338dd1" 00:03:57.951 ], 00:03:57.951 "product_name": "Malloc disk", 00:03:57.951 "block_size": 4096, 00:03:57.951 "num_blocks": 256, 00:03:57.951 "uuid": "733bd139-567e-4076-a61d-c8c2e4338dd1", 00:03:57.951 "assigned_rate_limits": { 00:03:57.951 "rw_ios_per_sec": 0, 00:03:57.951 "rw_mbytes_per_sec": 0, 00:03:57.951 "r_mbytes_per_sec": 0, 00:03:57.951 "w_mbytes_per_sec": 0 00:03:57.951 }, 00:03:57.951 "claimed": false, 00:03:57.951 "zoned": false, 00:03:57.951 "supported_io_types": { 00:03:57.951 "read": true, 00:03:57.951 "write": true, 00:03:57.951 "unmap": true, 00:03:57.951 "flush": true, 00:03:57.951 "reset": true, 00:03:57.951 "nvme_admin": false, 00:03:57.951 "nvme_io": false, 00:03:57.951 "nvme_io_md": false, 00:03:57.951 "write_zeroes": true, 00:03:57.951 "zcopy": true, 00:03:57.951 "get_zone_info": false, 00:03:57.951 "zone_management": false, 00:03:57.951 "zone_append": false, 00:03:57.951 "compare": false, 00:03:57.951 "compare_and_write": false, 00:03:57.951 "abort": true, 00:03:57.951 "seek_hole": false, 00:03:57.951 "seek_data": false, 00:03:57.951 "copy": true, 00:03:57.951 "nvme_iov_md": false 00:03:57.951 }, 00:03:57.951 "memory_domains": [ 00:03:57.951 { 00:03:57.951 "dma_device_id": "system", 00:03:57.951 "dma_device_type": 1 00:03:57.951 }, 00:03:57.951 { 00:03:57.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.951 "dma_device_type": 2 00:03:57.951 } 00:03:57.951 ], 00:03:57.951 "driver_specific": {} 00:03:57.951 } 00:03:57.951 ]' 00:03:57.951 12:43:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.951 12:43:16 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.951 12:43:16 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.951 12:43:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.951 12:43:16 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.951 12:43:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:57.951 12:43:16 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.951 00:03:57.951 real 0m0.105s 00:03:57.951 user 0m0.068s 00:03:57.951 sys 0m0.011s 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.951 12:43:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 ************************************ 00:03:57.951 END TEST rpc_plugins 00:03:57.951 ************************************ 00:03:57.951 12:43:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:57.951 12:43:16 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.951 12:43:16 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.951 12:43:16 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.951 12:43:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 ************************************ 00:03:57.951 START TEST rpc_trace_cmd_test 00:03:57.951 ************************************ 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:57.951 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3267226", 00:03:57.951 "tpoint_group_mask": "0x8", 00:03:57.951 "iscsi_conn": { 00:03:57.951 "mask": "0x2", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "scsi": { 00:03:57.951 "mask": "0x4", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "bdev": { 00:03:57.951 "mask": "0x8", 00:03:57.951 "tpoint_mask": "0xffffffffffffffff" 00:03:57.951 }, 00:03:57.951 "nvmf_rdma": { 00:03:57.951 "mask": "0x10", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "nvmf_tcp": { 00:03:57.951 "mask": "0x20", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "ftl": { 00:03:57.951 "mask": "0x40", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "blobfs": { 00:03:57.951 "mask": "0x80", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "dsa": { 00:03:57.951 "mask": "0x200", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "thread": { 00:03:57.951 "mask": "0x400", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "nvme_pcie": { 00:03:57.951 "mask": "0x800", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "iaa": { 00:03:57.951 "mask": "0x1000", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "nvme_tcp": { 00:03:57.951 "mask": "0x2000", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "bdev_nvme": { 00:03:57.951 "mask": "0x4000", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 }, 00:03:57.951 "sock": { 00:03:57.951 "mask": "0x8000", 00:03:57.951 "tpoint_mask": "0x0" 00:03:57.951 } 00:03:57.951 }' 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:57.951 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
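The trace checks above can be repeated by hand against the same target (it was started with -e bdev, so only the bdev tracepoint group is enabled). A rough sketch, assuming scripts/rpc.py and jq are available:

  scripts/rpc.py trace_get_info | jq 'has("tpoint_group_mask")'   # true; the mask is 0x8, the bdev group
  scripts/rpc.py trace_get_info | jq 'has("tpoint_shm_path")'     # true; points at /dev/shm/spdk_tgt_trace.pid<pid>
  scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask         # 0xffffffffffffffff, i.e. all bdev tracepoints on
  # per the startup notice, a snapshot of the trace buffer can be captured with:
  #   spdk_trace -s spdk_tgt -p <pid>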
00:03:58.210 00:03:58.210 real 0m0.200s 00:03:58.210 user 0m0.177s 00:03:58.210 sys 0m0.014s 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.210 12:43:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:58.210 ************************************ 00:03:58.210 END TEST rpc_trace_cmd_test 00:03:58.210 ************************************ 00:03:58.210 12:43:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.210 12:43:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:58.210 12:43:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:58.210 12:43:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:58.210 12:43:16 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.210 12:43:16 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.210 12:43:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.210 ************************************ 00:03:58.210 START TEST rpc_daemon_integrity 00:03:58.210 ************************************ 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:58.210 { 00:03:58.210 "name": "Malloc2", 00:03:58.210 "aliases": [ 00:03:58.210 "780a0f69-2a9f-49ca-b0d1-1374b9210ddb" 00:03:58.210 ], 00:03:58.210 "product_name": "Malloc disk", 00:03:58.210 "block_size": 512, 00:03:58.210 "num_blocks": 16384, 00:03:58.210 "uuid": "780a0f69-2a9f-49ca-b0d1-1374b9210ddb", 00:03:58.210 "assigned_rate_limits": { 00:03:58.210 "rw_ios_per_sec": 0, 00:03:58.210 "rw_mbytes_per_sec": 0, 00:03:58.210 "r_mbytes_per_sec": 0, 00:03:58.210 "w_mbytes_per_sec": 0 00:03:58.210 }, 00:03:58.210 "claimed": false, 00:03:58.210 "zoned": false, 00:03:58.210 "supported_io_types": { 00:03:58.210 "read": true, 00:03:58.210 "write": true, 00:03:58.210 "unmap": true, 00:03:58.210 "flush": true, 00:03:58.210 "reset": true, 00:03:58.210 "nvme_admin": false, 00:03:58.210 "nvme_io": false, 
00:03:58.210 "nvme_io_md": false, 00:03:58.210 "write_zeroes": true, 00:03:58.210 "zcopy": true, 00:03:58.210 "get_zone_info": false, 00:03:58.210 "zone_management": false, 00:03:58.210 "zone_append": false, 00:03:58.210 "compare": false, 00:03:58.210 "compare_and_write": false, 00:03:58.210 "abort": true, 00:03:58.210 "seek_hole": false, 00:03:58.210 "seek_data": false, 00:03:58.210 "copy": true, 00:03:58.210 "nvme_iov_md": false 00:03:58.210 }, 00:03:58.210 "memory_domains": [ 00:03:58.210 { 00:03:58.210 "dma_device_id": "system", 00:03:58.210 "dma_device_type": 1 00:03:58.210 }, 00:03:58.210 { 00:03:58.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.210 "dma_device_type": 2 00:03:58.210 } 00:03:58.210 ], 00:03:58.210 "driver_specific": {} 00:03:58.210 } 00:03:58.210 ]' 00:03:58.210 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.468 [2024-07-15 12:43:16.459009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:58.468 [2024-07-15 12:43:16.459077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:58.468 [2024-07-15 12:43:16.459112] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x256e2f0 00:03:58.468 [2024-07-15 12:43:16.459125] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:58.468 [2024-07-15 12:43:16.460267] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:58.468 [2024-07-15 12:43:16.460289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:58.468 Passthru0 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:58.468 { 00:03:58.468 "name": "Malloc2", 00:03:58.468 "aliases": [ 00:03:58.468 "780a0f69-2a9f-49ca-b0d1-1374b9210ddb" 00:03:58.468 ], 00:03:58.468 "product_name": "Malloc disk", 00:03:58.468 "block_size": 512, 00:03:58.468 "num_blocks": 16384, 00:03:58.468 "uuid": "780a0f69-2a9f-49ca-b0d1-1374b9210ddb", 00:03:58.468 "assigned_rate_limits": { 00:03:58.468 "rw_ios_per_sec": 0, 00:03:58.468 "rw_mbytes_per_sec": 0, 00:03:58.468 "r_mbytes_per_sec": 0, 00:03:58.468 "w_mbytes_per_sec": 0 00:03:58.468 }, 00:03:58.468 "claimed": true, 00:03:58.468 "claim_type": "exclusive_write", 00:03:58.468 "zoned": false, 00:03:58.468 "supported_io_types": { 00:03:58.468 "read": true, 00:03:58.468 "write": true, 00:03:58.468 "unmap": true, 00:03:58.468 "flush": true, 00:03:58.468 "reset": true, 00:03:58.468 "nvme_admin": false, 00:03:58.468 "nvme_io": false, 00:03:58.468 "nvme_io_md": false, 00:03:58.468 "write_zeroes": true, 00:03:58.468 "zcopy": true, 00:03:58.468 "get_zone_info": 
false, 00:03:58.468 "zone_management": false, 00:03:58.468 "zone_append": false, 00:03:58.468 "compare": false, 00:03:58.468 "compare_and_write": false, 00:03:58.468 "abort": true, 00:03:58.468 "seek_hole": false, 00:03:58.468 "seek_data": false, 00:03:58.468 "copy": true, 00:03:58.468 "nvme_iov_md": false 00:03:58.468 }, 00:03:58.468 "memory_domains": [ 00:03:58.468 { 00:03:58.468 "dma_device_id": "system", 00:03:58.468 "dma_device_type": 1 00:03:58.468 }, 00:03:58.468 { 00:03:58.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.468 "dma_device_type": 2 00:03:58.468 } 00:03:58.468 ], 00:03:58.468 "driver_specific": {} 00:03:58.468 }, 00:03:58.468 { 00:03:58.468 "name": "Passthru0", 00:03:58.468 "aliases": [ 00:03:58.468 "7b656c2d-2d84-5e03-bc4e-b56a903bc7c7" 00:03:58.468 ], 00:03:58.468 "product_name": "passthru", 00:03:58.468 "block_size": 512, 00:03:58.468 "num_blocks": 16384, 00:03:58.468 "uuid": "7b656c2d-2d84-5e03-bc4e-b56a903bc7c7", 00:03:58.468 "assigned_rate_limits": { 00:03:58.468 "rw_ios_per_sec": 0, 00:03:58.468 "rw_mbytes_per_sec": 0, 00:03:58.468 "r_mbytes_per_sec": 0, 00:03:58.468 "w_mbytes_per_sec": 0 00:03:58.468 }, 00:03:58.468 "claimed": false, 00:03:58.468 "zoned": false, 00:03:58.468 "supported_io_types": { 00:03:58.468 "read": true, 00:03:58.468 "write": true, 00:03:58.468 "unmap": true, 00:03:58.468 "flush": true, 00:03:58.468 "reset": true, 00:03:58.468 "nvme_admin": false, 00:03:58.468 "nvme_io": false, 00:03:58.468 "nvme_io_md": false, 00:03:58.468 "write_zeroes": true, 00:03:58.468 "zcopy": true, 00:03:58.468 "get_zone_info": false, 00:03:58.468 "zone_management": false, 00:03:58.468 "zone_append": false, 00:03:58.468 "compare": false, 00:03:58.468 "compare_and_write": false, 00:03:58.468 "abort": true, 00:03:58.468 "seek_hole": false, 00:03:58.468 "seek_data": false, 00:03:58.468 "copy": true, 00:03:58.468 "nvme_iov_md": false 00:03:58.468 }, 00:03:58.468 "memory_domains": [ 00:03:58.468 { 00:03:58.468 "dma_device_id": "system", 00:03:58.468 "dma_device_type": 1 00:03:58.468 }, 00:03:58.468 { 00:03:58.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:58.468 "dma_device_type": 2 00:03:58.468 } 00:03:58.468 ], 00:03:58.468 "driver_specific": { 00:03:58.468 "passthru": { 00:03:58.468 "name": "Passthru0", 00:03:58.468 "base_bdev_name": "Malloc2" 00:03:58.468 } 00:03:58.468 } 00:03:58.468 } 00:03:58.468 ]' 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:58.468 12:43:16 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:58.468 00:03:58.468 real 0m0.218s 00:03:58.468 user 0m0.140s 00:03:58.468 sys 0m0.020s 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.468 12:43:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:58.468 ************************************ 00:03:58.468 END TEST rpc_daemon_integrity 00:03:58.468 ************************************ 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:58.468 12:43:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:58.468 12:43:16 rpc -- rpc/rpc.sh@84 -- # killprocess 3267226 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@948 -- # '[' -z 3267226 ']' 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@952 -- # kill -0 3267226 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@953 -- # uname 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3267226 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3267226' 00:03:58.468 killing process with pid 3267226 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@967 -- # kill 3267226 00:03:58.468 12:43:16 rpc -- common/autotest_common.sh@972 -- # wait 3267226 00:03:59.033 00:03:59.033 real 0m1.910s 00:03:59.033 user 0m2.362s 00:03:59.033 sys 0m0.590s 00:03:59.033 12:43:17 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.033 12:43:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 ************************************ 00:03:59.033 END TEST rpc 00:03:59.033 ************************************ 00:03:59.033 12:43:17 -- common/autotest_common.sh@1142 -- # return 0 00:03:59.033 12:43:17 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:59.033 12:43:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.033 12:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.033 12:43:17 -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 ************************************ 00:03:59.033 START TEST skip_rpc 00:03:59.033 ************************************ 00:03:59.033 12:43:17 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:59.033 * Looking for test storage... 
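Every suite in this trace is wrapped in the same scaffolding from autotest_common.sh: start spdk_tgt, wait for its RPC socket, run the checks, kill the target on exit. A stripped-down sketch of that pattern (the & / $! capture of the pid is an assumption about how the scripts record it; killprocess and waitforlisten are the helpers seen above):

  build/bin/spdk_tgt -m 0x1 &
  spdk_pid=$!                                 # assumed capture; rpc.sh stores this as spdk_pid
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $spdk_pid                     # returns once /var/tmp/spdk.sock accepts RPCs
  # ... individual run_test blocks issue their rpc_cmd calls here ...
  trap - SIGINT SIGTERM EXIT
  killprocess $spdk_pid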
00:03:59.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.033 12:43:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:59.033 12:43:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:59.033 12:43:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:59.033 12:43:17 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.033 12:43:17 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.033 12:43:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.033 ************************************ 00:03:59.033 START TEST skip_rpc 00:03:59.033 ************************************ 00:03:59.033 12:43:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:59.033 12:43:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3267665 00:03:59.033 12:43:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:59.033 12:43:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.033 12:43:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:59.033 [2024-07-15 12:43:17.235747] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:03:59.033 [2024-07-15 12:43:17.235832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3267665 ] 00:03:59.291 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.291 [2024-07-15 12:43:17.291081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.291 [2024-07-15 12:43:17.392914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.551 12:43:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:04.551 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:04.551 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:04.551 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:04.551 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3267665 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3267665 ']' 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3267665 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3267665 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3267665' 00:04:04.552 killing process with pid 3267665 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3267665 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3267665 00:04:04.552 00:04:04.552 real 0m5.454s 00:04:04.552 user 0m5.167s 00:04:04.552 sys 0m0.292s 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.552 12:43:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.552 ************************************ 00:04:04.552 END TEST skip_rpc 00:04:04.552 ************************************ 00:04:04.552 12:43:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:04.552 12:43:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:04.552 12:43:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.552 12:43:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.552 12:43:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.552 ************************************ 00:04:04.552 START TEST skip_rpc_with_json 00:04:04.552 ************************************ 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3268360 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3268360 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3268360 ']' 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
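The skip_rpc suite that just finished is a pure negative test: the target was launched with --no-rpc-server, so the only assertion is that an RPC cannot succeed. Condensed, with the NOT helper replaced by an explicit check:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                                     # no socket to waitforlisten on, so the script just sleeps
  if scripts/rpc.py spdk_get_version; then
      echo "spdk_get_version unexpectedly succeeded" >&2
      exit 1
  fi
  killprocess $spdk_pid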
00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.552 12:43:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.552 [2024-07-15 12:43:22.743244] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:04.552 [2024-07-15 12:43:22.743347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268360 ] 00:04:04.810 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.810 [2024-07-15 12:43:22.805607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.810 [2024-07-15 12:43:22.915850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.068 [2024-07-15 12:43:23.163770] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:05.068 request: 00:04:05.068 { 00:04:05.068 "trtype": "tcp", 00:04:05.068 "method": "nvmf_get_transports", 00:04:05.068 "req_id": 1 00:04:05.068 } 00:04:05.068 Got JSON-RPC error response 00:04:05.068 response: 00:04:05.068 { 00:04:05.068 "code": -19, 00:04:05.068 "message": "No such device" 00:04:05.068 } 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.068 [2024-07-15 12:43:23.171901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.068 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.325 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.325 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.325 { 00:04:05.325 "subsystems": [ 00:04:05.325 { 00:04:05.325 "subsystem": "vfio_user_target", 00:04:05.325 "config": null 00:04:05.325 }, 00:04:05.325 { 00:04:05.325 "subsystem": "keyring", 00:04:05.325 "config": [] 00:04:05.325 }, 00:04:05.325 { 00:04:05.325 "subsystem": "iobuf", 00:04:05.325 "config": [ 00:04:05.325 { 00:04:05.325 "method": "iobuf_set_options", 00:04:05.325 "params": { 00:04:05.325 "small_pool_count": 8192, 00:04:05.325 "large_pool_count": 1024, 00:04:05.325 "small_bufsize": 8192, 00:04:05.325 "large_bufsize": 
135168 00:04:05.325 } 00:04:05.325 } 00:04:05.325 ] 00:04:05.325 }, 00:04:05.325 { 00:04:05.325 "subsystem": "sock", 00:04:05.325 "config": [ 00:04:05.325 { 00:04:05.325 "method": "sock_set_default_impl", 00:04:05.325 "params": { 00:04:05.325 "impl_name": "posix" 00:04:05.325 } 00:04:05.325 }, 00:04:05.325 { 00:04:05.325 "method": "sock_impl_set_options", 00:04:05.325 "params": { 00:04:05.325 "impl_name": "ssl", 00:04:05.325 "recv_buf_size": 4096, 00:04:05.325 "send_buf_size": 4096, 00:04:05.325 "enable_recv_pipe": true, 00:04:05.325 "enable_quickack": false, 00:04:05.325 "enable_placement_id": 0, 00:04:05.325 "enable_zerocopy_send_server": true, 00:04:05.325 "enable_zerocopy_send_client": false, 00:04:05.325 "zerocopy_threshold": 0, 00:04:05.325 "tls_version": 0, 00:04:05.325 "enable_ktls": false 00:04:05.325 } 00:04:05.325 }, 00:04:05.325 { 00:04:05.325 "method": "sock_impl_set_options", 00:04:05.325 "params": { 00:04:05.326 "impl_name": "posix", 00:04:05.326 "recv_buf_size": 2097152, 00:04:05.326 "send_buf_size": 2097152, 00:04:05.326 "enable_recv_pipe": true, 00:04:05.326 "enable_quickack": false, 00:04:05.326 "enable_placement_id": 0, 00:04:05.326 "enable_zerocopy_send_server": true, 00:04:05.326 "enable_zerocopy_send_client": false, 00:04:05.326 "zerocopy_threshold": 0, 00:04:05.326 "tls_version": 0, 00:04:05.326 "enable_ktls": false 00:04:05.326 } 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "vmd", 00:04:05.326 "config": [] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "accel", 00:04:05.326 "config": [ 00:04:05.326 { 00:04:05.326 "method": "accel_set_options", 00:04:05.326 "params": { 00:04:05.326 "small_cache_size": 128, 00:04:05.326 "large_cache_size": 16, 00:04:05.326 "task_count": 2048, 00:04:05.326 "sequence_count": 2048, 00:04:05.326 "buf_count": 2048 00:04:05.326 } 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "bdev", 00:04:05.326 "config": [ 00:04:05.326 { 00:04:05.326 "method": "bdev_set_options", 00:04:05.326 "params": { 00:04:05.326 "bdev_io_pool_size": 65535, 00:04:05.326 "bdev_io_cache_size": 256, 00:04:05.326 "bdev_auto_examine": true, 00:04:05.326 "iobuf_small_cache_size": 128, 00:04:05.326 "iobuf_large_cache_size": 16 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "bdev_raid_set_options", 00:04:05.326 "params": { 00:04:05.326 "process_window_size_kb": 1024 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "bdev_iscsi_set_options", 00:04:05.326 "params": { 00:04:05.326 "timeout_sec": 30 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "bdev_nvme_set_options", 00:04:05.326 "params": { 00:04:05.326 "action_on_timeout": "none", 00:04:05.326 "timeout_us": 0, 00:04:05.326 "timeout_admin_us": 0, 00:04:05.326 "keep_alive_timeout_ms": 10000, 00:04:05.326 "arbitration_burst": 0, 00:04:05.326 "low_priority_weight": 0, 00:04:05.326 "medium_priority_weight": 0, 00:04:05.326 "high_priority_weight": 0, 00:04:05.326 "nvme_adminq_poll_period_us": 10000, 00:04:05.326 "nvme_ioq_poll_period_us": 0, 00:04:05.326 "io_queue_requests": 0, 00:04:05.326 "delay_cmd_submit": true, 00:04:05.326 "transport_retry_count": 4, 00:04:05.326 "bdev_retry_count": 3, 00:04:05.326 "transport_ack_timeout": 0, 00:04:05.326 "ctrlr_loss_timeout_sec": 0, 00:04:05.326 "reconnect_delay_sec": 0, 00:04:05.326 "fast_io_fail_timeout_sec": 0, 00:04:05.326 "disable_auto_failback": false, 00:04:05.326 "generate_uuids": false, 00:04:05.326 "transport_tos": 0, 
00:04:05.326 "nvme_error_stat": false, 00:04:05.326 "rdma_srq_size": 0, 00:04:05.326 "io_path_stat": false, 00:04:05.326 "allow_accel_sequence": false, 00:04:05.326 "rdma_max_cq_size": 0, 00:04:05.326 "rdma_cm_event_timeout_ms": 0, 00:04:05.326 "dhchap_digests": [ 00:04:05.326 "sha256", 00:04:05.326 "sha384", 00:04:05.326 "sha512" 00:04:05.326 ], 00:04:05.326 "dhchap_dhgroups": [ 00:04:05.326 "null", 00:04:05.326 "ffdhe2048", 00:04:05.326 "ffdhe3072", 00:04:05.326 "ffdhe4096", 00:04:05.326 "ffdhe6144", 00:04:05.326 "ffdhe8192" 00:04:05.326 ] 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "bdev_nvme_set_hotplug", 00:04:05.326 "params": { 00:04:05.326 "period_us": 100000, 00:04:05.326 "enable": false 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "bdev_wait_for_examine" 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "scsi", 00:04:05.326 "config": null 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "scheduler", 00:04:05.326 "config": [ 00:04:05.326 { 00:04:05.326 "method": "framework_set_scheduler", 00:04:05.326 "params": { 00:04:05.326 "name": "static" 00:04:05.326 } 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "vhost_scsi", 00:04:05.326 "config": [] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "vhost_blk", 00:04:05.326 "config": [] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "ublk", 00:04:05.326 "config": [] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "nbd", 00:04:05.326 "config": [] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "nvmf", 00:04:05.326 "config": [ 00:04:05.326 { 00:04:05.326 "method": "nvmf_set_config", 00:04:05.326 "params": { 00:04:05.326 "discovery_filter": "match_any", 00:04:05.326 "admin_cmd_passthru": { 00:04:05.326 "identify_ctrlr": false 00:04:05.326 } 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "nvmf_set_max_subsystems", 00:04:05.326 "params": { 00:04:05.326 "max_subsystems": 1024 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "nvmf_set_crdt", 00:04:05.326 "params": { 00:04:05.326 "crdt1": 0, 00:04:05.326 "crdt2": 0, 00:04:05.326 "crdt3": 0 00:04:05.326 } 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "method": "nvmf_create_transport", 00:04:05.326 "params": { 00:04:05.326 "trtype": "TCP", 00:04:05.326 "max_queue_depth": 128, 00:04:05.326 "max_io_qpairs_per_ctrlr": 127, 00:04:05.326 "in_capsule_data_size": 4096, 00:04:05.326 "max_io_size": 131072, 00:04:05.326 "io_unit_size": 131072, 00:04:05.326 "max_aq_depth": 128, 00:04:05.326 "num_shared_buffers": 511, 00:04:05.326 "buf_cache_size": 4294967295, 00:04:05.326 "dif_insert_or_strip": false, 00:04:05.326 "zcopy": false, 00:04:05.326 "c2h_success": true, 00:04:05.326 "sock_priority": 0, 00:04:05.326 "abort_timeout_sec": 1, 00:04:05.326 "ack_timeout": 0, 00:04:05.326 "data_wr_pool_size": 0 00:04:05.326 } 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 }, 00:04:05.326 { 00:04:05.326 "subsystem": "iscsi", 00:04:05.326 "config": [ 00:04:05.326 { 00:04:05.326 "method": "iscsi_set_options", 00:04:05.326 "params": { 00:04:05.326 "node_base": "iqn.2016-06.io.spdk", 00:04:05.326 "max_sessions": 128, 00:04:05.326 "max_connections_per_session": 2, 00:04:05.326 "max_queue_depth": 64, 00:04:05.326 "default_time2wait": 2, 00:04:05.326 "default_time2retain": 20, 00:04:05.326 "first_burst_length": 8192, 00:04:05.326 "immediate_data": true, 00:04:05.326 "allow_duplicated_isid": false, 00:04:05.326 
"error_recovery_level": 0, 00:04:05.326 "nop_timeout": 60, 00:04:05.326 "nop_in_interval": 30, 00:04:05.326 "disable_chap": false, 00:04:05.326 "require_chap": false, 00:04:05.326 "mutual_chap": false, 00:04:05.326 "chap_group": 0, 00:04:05.326 "max_large_datain_per_connection": 64, 00:04:05.326 "max_r2t_per_connection": 4, 00:04:05.326 "pdu_pool_size": 36864, 00:04:05.326 "immediate_data_pool_size": 16384, 00:04:05.326 "data_out_pool_size": 2048 00:04:05.326 } 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 } 00:04:05.326 ] 00:04:05.326 } 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3268360 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3268360 ']' 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3268360 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268360 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268360' 00:04:05.326 killing process with pid 3268360 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3268360 00:04:05.326 12:43:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3268360 00:04:05.584 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3268500 00:04:05.584 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.584 12:43:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3268500 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3268500 ']' 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3268500 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268500 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268500' 00:04:10.840 killing process with pid 3268500 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3268500 00:04:10.840 12:43:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3268500 
00:04:11.098 12:43:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:11.099 00:04:11.099 real 0m6.550s 00:04:11.099 user 0m6.160s 00:04:11.099 sys 0m0.635s 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.099 ************************************ 00:04:11.099 END TEST skip_rpc_with_json 00:04:11.099 ************************************ 00:04:11.099 12:43:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.099 12:43:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:11.099 12:43:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.099 12:43:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.099 12:43:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.099 ************************************ 00:04:11.099 START TEST skip_rpc_with_delay 00:04:11.099 ************************************ 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:11.099 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.356 [2024-07-15 12:43:29.345100] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
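skip_rpc_with_delay provokes the startup error recorded above deliberately: --wait-for-rpc has nothing to wait on when the RPC server is disabled, so the only assertion is that spdk_tgt refuses to start. Roughly (again with the NOT helper replaced by an explicit check):

  if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt unexpectedly started" >&2
      exit 1
  fi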
00:04:11.356 [2024-07-15 12:43:29.345216] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:11.356 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:11.356 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:11.356 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:11.356 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:11.356 00:04:11.356 real 0m0.070s 00:04:11.356 user 0m0.047s 00:04:11.356 sys 0m0.023s 00:04:11.356 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.356 12:43:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:11.356 ************************************ 00:04:11.356 END TEST skip_rpc_with_delay 00:04:11.356 ************************************ 00:04:11.356 12:43:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.356 12:43:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:11.356 12:43:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:11.356 12:43:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:11.356 12:43:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.356 12:43:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.356 12:43:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.356 ************************************ 00:04:11.356 START TEST exit_on_failed_rpc_init 00:04:11.356 ************************************ 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3269213 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3269213 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3269213 ']' 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.356 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.356 [2024-07-15 12:43:29.460648] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:04:11.356 [2024-07-15 12:43:29.460759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269213 ] 00:04:11.356 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.356 [2024-07-15 12:43:29.517432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.614 [2024-07-15 12:43:29.629968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:11.872 12:43:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.872 [2024-07-15 12:43:29.931205] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
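At this point the first spdk_tgt of exit_on_failed_rpc_init (pid 3269213 in this run) is up and owns the default RPC socket /var/tmp/spdk.sock; the lines that follow start a second instance on core mask 0x2 and require it to abort with the "socket in use" RPC error. A hedged sketch of the same two-instance conflict, with the socket passed explicitly via -r rather than relying on the default as the suite does:

    #!/usr/bin/env bash
    # A second spdk_tgt on an already-claimed RPC socket must fail to initialize.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    SOCK=/var/tmp/spdk.sock

    "$SPDK_BIN" -m 0x1 -r "$SOCK" &     # first target claims the socket
    first=$!
    sleep 2                             # crude stand-in for the suite's waitforlisten

    if "$SPDK_BIN" -m 0x2 -r "$SOCK"; then
        echo "FAIL: second target brought up RPC on an in-use socket" >&2
        rc=1
    else
        echo "PASS: second target aborted as expected"
        rc=0
    fi

    kill -SIGINT "$first"
    wait "$first" 2>/dev/null
    exit "$rc"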
00:04:11.872 [2024-07-15 12:43:29.931299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269344 ] 00:04:11.872 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.872 [2024-07-15 12:43:29.987875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.130 [2024-07-15 12:43:30.109248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.130 [2024-07-15 12:43:30.109386] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:12.130 [2024-07-15 12:43:30.109408] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:12.130 [2024-07-15 12:43:30.109420] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3269213 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3269213 ']' 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3269213 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3269213 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3269213' 00:04:12.130 killing process with pid 3269213 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3269213 00:04:12.130 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3269213 00:04:12.694 00:04:12.694 real 0m1.280s 00:04:12.694 user 0m1.469s 00:04:12.694 sys 0m0.414s 00:04:12.694 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.694 12:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.694 ************************************ 00:04:12.694 END TEST exit_on_failed_rpc_init 00:04:12.694 ************************************ 00:04:12.694 12:43:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.694 12:43:30 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.694 00:04:12.694 real 0m13.612s 00:04:12.694 user 0m12.934s 00:04:12.694 sys 0m1.544s 00:04:12.694 12:43:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.694 12:43:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.694 ************************************ 00:04:12.694 END TEST skip_rpc 00:04:12.694 ************************************ 00:04:12.694 12:43:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.694 12:43:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.694 12:43:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.694 12:43:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.694 12:43:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.694 ************************************ 00:04:12.694 START TEST rpc_client 00:04:12.694 ************************************ 00:04:12.694 12:43:30 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.694 * Looking for test storage... 00:04:12.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:12.694 12:43:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:12.694 OK 00:04:12.694 12:43:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:12.694 00:04:12.694 real 0m0.071s 00:04:12.694 user 0m0.035s 00:04:12.694 sys 0m0.041s 00:04:12.694 12:43:30 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.694 12:43:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:12.694 ************************************ 00:04:12.694 END TEST rpc_client 00:04:12.694 ************************************ 00:04:12.694 12:43:30 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.694 12:43:30 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:12.694 12:43:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.694 12:43:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.694 12:43:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.694 ************************************ 00:04:12.694 START TEST json_config 00:04:12.694 ************************************ 00:04:12.694 12:43:30 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:12.951 12:43:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:12.951 
12:43:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:12.951 12:43:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:12.951 12:43:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:12.951 12:43:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:12.951 12:43:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.951 12:43:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.951 12:43:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.951 12:43:30 json_config -- paths/export.sh@5 -- # export PATH 00:04:12.951 12:43:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@47 -- # : 0 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:12.951 12:43:30 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:12.951 12:43:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:12.951 12:43:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:12.951 12:43:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:12.951 12:43:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:12.951 12:43:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:12.951 12:43:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:12.952 INFO: JSON configuration test init 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.952 12:43:30 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:12.952 12:43:30 json_config -- json_config/common.sh@9 -- # local app=target 00:04:12.952 12:43:30 json_config -- json_config/common.sh@10 -- # shift 00:04:12.952 12:43:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:12.952 12:43:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:12.952 12:43:30 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:12.952 12:43:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.952 12:43:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:12.952 12:43:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3269532 00:04:12.952 12:43:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:12.952 12:43:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:12.952 Waiting for target to run... 00:04:12.952 12:43:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3269532 /var/tmp/spdk_tgt.sock 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 3269532 ']' 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:12.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:12.952 12:43:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.952 [2024-07-15 12:43:30.982387] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:12.952 [2024-07-15 12:43:30.982481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3269532 ] 00:04:12.952 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.515 [2024-07-15 12:43:31.497690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.516 [2024-07-15 12:43:31.593557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.772 12:43:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:13.772 12:43:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:13.772 12:43:31 json_config -- json_config/common.sh@26 -- # echo '' 00:04:13.772 00:04:13.772 12:43:31 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:13.772 12:43:31 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:13.772 12:43:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.772 12:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.772 12:43:31 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:13.772 12:43:31 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:13.772 12:43:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:13.772 12:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.772 12:43:31 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:13.772 12:43:31 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:13.772 12:43:31 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:17.080 12:43:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.080 12:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:17.080 12:43:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:17.080 12:43:35 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:17.338 12:43:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.338 12:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:17.338 12:43:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.338 12:43:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:17.338 12:43:35 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.338 12:43:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.596 MallocForNvmf0 00:04:17.596 12:43:35 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.596 12:43:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.854 MallocForNvmf1 00:04:17.854 12:43:35 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.854 12:43:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:18.113 [2024-07-15 12:43:36.088195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.113 12:43:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.113 12:43:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.371 12:43:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.371 12:43:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.629 12:43:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.629 12:43:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.888 12:43:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.888 12:43:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:18.888 [2024-07-15 12:43:37.059279] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.888 12:43:37 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:18.888 12:43:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.888 12:43:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.146 12:43:37 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:19.146 12:43:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.146 12:43:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.146 12:43:37 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:19.146 12:43:37 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:19.146 12:43:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:19.146 MallocBdevForConfigChangeCheck 00:04:19.405 12:43:37 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:19.405 12:43:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.405 12:43:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.405 12:43:37 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:19.405 12:43:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.664 12:43:37 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:19.664 INFO: shutting down applications... 00:04:19.664 12:43:37 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:19.664 12:43:37 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:19.664 12:43:37 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:19.664 12:43:37 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:21.567 Calling clear_iscsi_subsystem 00:04:21.567 Calling clear_nvmf_subsystem 00:04:21.567 Calling clear_nbd_subsystem 00:04:21.567 Calling clear_ublk_subsystem 00:04:21.567 Calling clear_vhost_blk_subsystem 00:04:21.567 Calling clear_vhost_scsi_subsystem 00:04:21.567 Calling clear_bdev_subsystem 00:04:21.567 12:43:39 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:21.567 12:43:39 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:21.567 12:43:39 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:21.567 12:43:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.567 12:43:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:21.567 12:43:39 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:21.837 12:43:39 json_config -- json_config/json_config.sh@345 -- # break 00:04:21.837 12:43:39 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:21.837 12:43:39 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:21.837 12:43:39 json_config -- json_config/common.sh@31 -- # local app=target 00:04:21.837 12:43:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.837 12:43:39 json_config -- json_config/common.sh@35 -- # [[ -n 3269532 ]] 00:04:21.837 12:43:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3269532 00:04:21.837 12:43:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.837 12:43:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.837 12:43:39 json_config -- json_config/common.sh@41 -- # kill -0 3269532 00:04:21.837 12:43:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.103 12:43:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.103 12:43:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.103 12:43:40 json_config -- json_config/common.sh@41 -- # kill -0 3269532 00:04:22.103 12:43:40 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:22.103 12:43:40 json_config -- json_config/common.sh@43 -- # break 00:04:22.103 12:43:40 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:22.103 12:43:40 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:04:22.103 SPDK target shutdown done 00:04:22.103 12:43:40 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:22.103 INFO: relaunching applications... 00:04:22.103 12:43:40 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.103 12:43:40 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.103 12:43:40 json_config -- json_config/common.sh@10 -- # shift 00:04:22.103 12:43:40 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.103 12:43:40 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.103 12:43:40 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.103 12:43:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.103 12:43:40 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.103 12:43:40 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3270777 00:04:22.103 12:43:40 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.103 12:43:40 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.103 Waiting for target to run... 00:04:22.103 12:43:40 json_config -- json_config/common.sh@25 -- # waitforlisten 3270777 /var/tmp/spdk_tgt.sock 00:04:22.103 12:43:40 json_config -- common/autotest_common.sh@829 -- # '[' -z 3270777 ']' 00:04:22.103 12:43:40 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.103 12:43:40 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.103 12:43:40 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.103 12:43:40 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.103 12:43:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.360 [2024-07-15 12:43:40.347530] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
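The relaunch just above starts spdk_tgt with --json spdk_tgt_config.json, i.e. it replays from disk the state the earlier RPCs built interactively: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both namespaces, and a listener on 127.0.0.1:4420. A condensed sketch of that build-then-persist round trip, using the same rpc.py calls that appear in the trace (error handling omitted):

    #!/usr/bin/env bash
    set -e
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Build the NVMe-oF TCP target state over RPC.
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # Persist the state; a fresh target can then be started with
    #   spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json
    $RPC save_config > spdk_tgt_config.json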
00:04:22.360 [2024-07-15 12:43:40.347639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3270777 ] 00:04:22.360 EAL: No free 2048 kB hugepages reported on node 1 00:04:22.620 [2024-07-15 12:43:40.702159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.620 [2024-07-15 12:43:40.778996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.912 [2024-07-15 12:43:43.805600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.912 [2024-07-15 12:43:43.838051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:25.912 12:43:43 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:25.912 12:43:43 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:25.912 12:43:43 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.912 00:04:25.912 12:43:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:25.912 12:43:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:25.912 INFO: Checking if target configuration is the same... 00:04:25.912 12:43:43 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.912 12:43:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:25.912 12:43:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:25.912 + '[' 2 -ne 2 ']' 00:04:25.912 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:25.912 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:25.912 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.912 +++ basename /dev/fd/62 00:04:25.912 ++ mktemp /tmp/62.XXX 00:04:25.912 + tmp_file_1=/tmp/62.ZBJ 00:04:25.912 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.912 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:25.912 + tmp_file_2=/tmp/spdk_tgt_config.json.eIb 00:04:25.912 + ret=0 00:04:25.912 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.171 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:26.171 + diff -u /tmp/62.ZBJ /tmp/spdk_tgt_config.json.eIb 00:04:26.171 + echo 'INFO: JSON config files are the same' 00:04:26.171 INFO: JSON config files are the same 00:04:26.171 + rm /tmp/62.ZBJ /tmp/spdk_tgt_config.json.eIb 00:04:26.171 + exit 0 00:04:26.171 12:43:44 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:26.171 12:43:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:26.171 INFO: changing configuration and checking if this can be detected... 
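The "JSON config files are the same" verdict above is produced by normalizing both sides before comparing: the live configuration is pulled with save_config, both documents go through config_filter.py -method sort, and a plain diff -u decides the outcome. A minimal sketch of the same comparison, assuming config_filter.py reads the JSON from stdin the way json_diff.sh pipes it (temp-file names differ from the run above):

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    SORT="$SPDK/test/json_config/config_filter.py -method sort"

    live=$(mktemp /tmp/live.XXX)
    saved=$(mktemp /tmp/saved.XXX)

    $RPC save_config | $SORT > "$live"                # running target
    $SORT < "$SPDK/spdk_tgt_config.json" > "$saved"   # configuration file on disk

    if diff -u "$live" "$saved"; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$saved"

The next step deliberately breaks the equality: MallocBdevForConfigChangeCheck is deleted over RPC, so the identical diff must then return non-zero (ret=1 in the trace that follows).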
00:04:26.171 12:43:44 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.171 12:43:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:26.431 12:43:44 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.431 12:43:44 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:26.431 12:43:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.431 + '[' 2 -ne 2 ']' 00:04:26.431 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.431 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:26.431 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.431 +++ basename /dev/fd/62 00:04:26.431 ++ mktemp /tmp/62.XXX 00:04:26.431 + tmp_file_1=/tmp/62.geI 00:04:26.431 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.431 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.431 + tmp_file_2=/tmp/spdk_tgt_config.json.TnS 00:04:26.431 + ret=0 00:04:26.431 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.024 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.024 + diff -u /tmp/62.geI /tmp/spdk_tgt_config.json.TnS 00:04:27.024 + ret=1 00:04:27.024 + echo '=== Start of file: /tmp/62.geI ===' 00:04:27.024 + cat /tmp/62.geI 00:04:27.024 + echo '=== End of file: /tmp/62.geI ===' 00:04:27.024 + echo '' 00:04:27.024 + echo '=== Start of file: /tmp/spdk_tgt_config.json.TnS ===' 00:04:27.024 + cat /tmp/spdk_tgt_config.json.TnS 00:04:27.024 + echo '=== End of file: /tmp/spdk_tgt_config.json.TnS ===' 00:04:27.024 + echo '' 00:04:27.024 + rm /tmp/62.geI /tmp/spdk_tgt_config.json.TnS 00:04:27.024 + exit 1 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:27.024 INFO: configuration change detected. 
00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:27.024 12:43:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.024 12:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 3270777 ]] 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:27.024 12:43:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.024 12:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:27.024 12:43:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:27.024 12:43:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.024 12:43:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.024 12:43:45 json_config -- json_config/json_config.sh@323 -- # killprocess 3270777 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@948 -- # '[' -z 3270777 ']' 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@952 -- # kill -0 3270777 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@953 -- # uname 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270777 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270777' 00:04:27.024 killing process with pid 3270777 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@967 -- # kill 3270777 00:04:27.024 12:43:45 json_config -- common/autotest_common.sh@972 -- # wait 3270777 00:04:28.929 12:43:46 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.929 12:43:46 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:28.929 12:43:46 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.929 12:43:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.929 12:43:46 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:28.929 12:43:46 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:28.929 INFO: Success 00:04:28.929 00:04:28.929 real 0m15.854s 
00:04:28.929 user 0m17.531s 00:04:28.929 sys 0m2.063s 00:04:28.929 12:43:46 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.929 12:43:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.929 ************************************ 00:04:28.929 END TEST json_config 00:04:28.929 ************************************ 00:04:28.929 12:43:46 -- common/autotest_common.sh@1142 -- # return 0 00:04:28.929 12:43:46 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.929 12:43:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.929 12:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.929 12:43:46 -- common/autotest_common.sh@10 -- # set +x 00:04:28.929 ************************************ 00:04:28.929 START TEST json_config_extra_key 00:04:28.929 ************************************ 00:04:28.929 12:43:46 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:28.929 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.929 12:43:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.929 12:43:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.929 12:43:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.929 12:43:46 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.929 12:43:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.929 12:43:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.929 12:43:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:28.929 12:43:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.929 12:43:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.930 12:43:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:28.930 12:43:46 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:28.930 12:43:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:28.930 12:43:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:28.930 INFO: launching applications... 00:04:28.930 12:43:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3271685 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.930 Waiting for target to run... 00:04:28.930 12:43:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3271685 /var/tmp/spdk_tgt.sock 00:04:28.930 12:43:46 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3271685 ']' 00:04:28.930 12:43:46 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.930 12:43:46 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.930 12:43:46 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.930 12:43:46 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.930 12:43:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:28.930 [2024-07-15 12:43:46.882238] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
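json_config_extra_key exercises the simplest lifecycle: start spdk_tgt directly from test/json_config/extra_key.json, wait for the RPC socket, then take the target down with SIGINT and poll until the PID disappears (the suite allows 30 half-second retries). A standalone sketch of that launch and teardown pattern, mirroring the flags and the 30 x 0.5 s budget seen in the trace:

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/test/json_config/extra_key.json" &
    pid=$!
    sleep 2   # crude stand-in for the suite's waitforlisten helper

    kill -SIGINT "$pid"             # ask the target to shut down cleanly
    for _ in {1..30}; do            # poll, as json_config/common.sh does
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            exit 0
        fi
        sleep 0.5
    done
    echo 'target did not exit in time' >&2
    exit 1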
00:04:28.930 [2024-07-15 12:43:46.882337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271685 ] 00:04:28.930 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.190 [2024-07-15 12:43:47.221099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.190 [2024-07-15 12:43:47.298591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.759 12:43:47 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.760 12:43:47 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:29.760 00:04:29.760 12:43:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:29.760 INFO: shutting down applications... 00:04:29.760 12:43:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3271685 ]] 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3271685 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3271685 00:04:29.760 12:43:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3271685 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.331 12:43:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.331 SPDK target shutdown done 00:04:30.331 12:43:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:30.331 Success 00:04:30.331 00:04:30.331 real 0m1.542s 00:04:30.331 user 0m1.527s 00:04:30.331 sys 0m0.427s 00:04:30.331 12:43:48 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.331 12:43:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:30.331 ************************************ 00:04:30.331 END TEST json_config_extra_key 00:04:30.331 ************************************ 00:04:30.331 12:43:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.331 12:43:48 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.331 12:43:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.331 12:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.331 12:43:48 -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.331 ************************************ 00:04:30.331 START TEST alias_rpc 00:04:30.331 ************************************ 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:30.331 * Looking for test storage... 00:04:30.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:30.331 12:43:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:30.331 12:43:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3271877 00:04:30.331 12:43:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.331 12:43:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3271877 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3271877 ']' 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.331 12:43:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.331 [2024-07-15 12:43:48.472646] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:30.331 [2024-07-15 12:43:48.472736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271877 ] 00:04:30.331 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.592 [2024-07-15 12:43:48.547397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.592 [2024-07-15 12:43:48.688730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.850 12:43:48 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.850 12:43:48 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:30.850 12:43:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:31.111 12:43:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3271877 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3271877 ']' 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3271877 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3271877 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3271877' 00:04:31.111 killing process with pid 3271877 00:04:31.111 12:43:49 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 3271877 00:04:31.111 12:43:49 alias_rpc -- common/autotest_common.sh@972 -- # wait 3271877 00:04:31.678 00:04:31.678 real 0m1.292s 00:04:31.678 user 0m1.451s 00:04:31.678 sys 0m0.443s 00:04:31.678 12:43:49 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.678 12:43:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.678 ************************************ 00:04:31.678 END TEST alias_rpc 00:04:31.678 ************************************ 00:04:31.679 12:43:49 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.679 12:43:49 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:31.679 12:43:49 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.679 12:43:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.679 12:43:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.679 12:43:49 -- common/autotest_common.sh@10 -- # set +x 00:04:31.679 ************************************ 00:04:31.679 START TEST spdkcli_tcp 00:04:31.679 ************************************ 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:31.679 * Looking for test storage... 00:04:31.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3272068 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:31.679 12:43:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3272068 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3272068 ']' 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.679 12:43:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:31.679 [2024-07-15 12:43:49.819776] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:04:31.679 [2024-07-15 12:43:49.819879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272068 ] 00:04:31.679 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.679 [2024-07-15 12:43:49.877754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:31.939 [2024-07-15 12:43:49.986811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.939 [2024-07-15 12:43:49.986816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.198 12:43:50 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.198 12:43:50 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:32.198 12:43:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3272196 00:04:32.198 12:43:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:32.198 12:43:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:32.456 [ 00:04:32.456 "bdev_malloc_delete", 00:04:32.456 "bdev_malloc_create", 00:04:32.456 "bdev_null_resize", 00:04:32.456 "bdev_null_delete", 00:04:32.456 "bdev_null_create", 00:04:32.456 "bdev_nvme_cuse_unregister", 00:04:32.456 "bdev_nvme_cuse_register", 00:04:32.456 "bdev_opal_new_user", 00:04:32.456 "bdev_opal_set_lock_state", 00:04:32.456 "bdev_opal_delete", 00:04:32.456 "bdev_opal_get_info", 00:04:32.456 "bdev_opal_create", 00:04:32.456 "bdev_nvme_opal_revert", 00:04:32.456 "bdev_nvme_opal_init", 00:04:32.456 "bdev_nvme_send_cmd", 00:04:32.456 "bdev_nvme_get_path_iostat", 00:04:32.456 "bdev_nvme_get_mdns_discovery_info", 00:04:32.456 "bdev_nvme_stop_mdns_discovery", 00:04:32.456 "bdev_nvme_start_mdns_discovery", 00:04:32.456 "bdev_nvme_set_multipath_policy", 00:04:32.456 "bdev_nvme_set_preferred_path", 00:04:32.456 "bdev_nvme_get_io_paths", 00:04:32.456 "bdev_nvme_remove_error_injection", 00:04:32.456 "bdev_nvme_add_error_injection", 00:04:32.456 "bdev_nvme_get_discovery_info", 00:04:32.456 "bdev_nvme_stop_discovery", 00:04:32.456 "bdev_nvme_start_discovery", 00:04:32.456 "bdev_nvme_get_controller_health_info", 00:04:32.456 "bdev_nvme_disable_controller", 00:04:32.456 "bdev_nvme_enable_controller", 00:04:32.456 "bdev_nvme_reset_controller", 00:04:32.456 "bdev_nvme_get_transport_statistics", 00:04:32.456 "bdev_nvme_apply_firmware", 00:04:32.456 "bdev_nvme_detach_controller", 00:04:32.456 "bdev_nvme_get_controllers", 00:04:32.456 "bdev_nvme_attach_controller", 00:04:32.456 "bdev_nvme_set_hotplug", 00:04:32.456 "bdev_nvme_set_options", 00:04:32.456 "bdev_passthru_delete", 00:04:32.456 "bdev_passthru_create", 00:04:32.456 "bdev_lvol_set_parent_bdev", 00:04:32.456 "bdev_lvol_set_parent", 00:04:32.456 "bdev_lvol_check_shallow_copy", 00:04:32.456 "bdev_lvol_start_shallow_copy", 00:04:32.456 "bdev_lvol_grow_lvstore", 00:04:32.456 "bdev_lvol_get_lvols", 00:04:32.456 "bdev_lvol_get_lvstores", 00:04:32.456 "bdev_lvol_delete", 00:04:32.456 "bdev_lvol_set_read_only", 00:04:32.456 "bdev_lvol_resize", 00:04:32.456 "bdev_lvol_decouple_parent", 00:04:32.456 "bdev_lvol_inflate", 00:04:32.456 "bdev_lvol_rename", 00:04:32.456 "bdev_lvol_clone_bdev", 00:04:32.456 "bdev_lvol_clone", 00:04:32.456 "bdev_lvol_snapshot", 00:04:32.456 "bdev_lvol_create", 00:04:32.456 "bdev_lvol_delete_lvstore", 00:04:32.456 
"bdev_lvol_rename_lvstore", 00:04:32.456 "bdev_lvol_create_lvstore", 00:04:32.456 "bdev_raid_set_options", 00:04:32.456 "bdev_raid_remove_base_bdev", 00:04:32.456 "bdev_raid_add_base_bdev", 00:04:32.456 "bdev_raid_delete", 00:04:32.456 "bdev_raid_create", 00:04:32.456 "bdev_raid_get_bdevs", 00:04:32.456 "bdev_error_inject_error", 00:04:32.456 "bdev_error_delete", 00:04:32.456 "bdev_error_create", 00:04:32.456 "bdev_split_delete", 00:04:32.456 "bdev_split_create", 00:04:32.456 "bdev_delay_delete", 00:04:32.456 "bdev_delay_create", 00:04:32.456 "bdev_delay_update_latency", 00:04:32.456 "bdev_zone_block_delete", 00:04:32.456 "bdev_zone_block_create", 00:04:32.456 "blobfs_create", 00:04:32.456 "blobfs_detect", 00:04:32.456 "blobfs_set_cache_size", 00:04:32.456 "bdev_aio_delete", 00:04:32.456 "bdev_aio_rescan", 00:04:32.456 "bdev_aio_create", 00:04:32.456 "bdev_ftl_set_property", 00:04:32.456 "bdev_ftl_get_properties", 00:04:32.456 "bdev_ftl_get_stats", 00:04:32.456 "bdev_ftl_unmap", 00:04:32.456 "bdev_ftl_unload", 00:04:32.456 "bdev_ftl_delete", 00:04:32.456 "bdev_ftl_load", 00:04:32.456 "bdev_ftl_create", 00:04:32.456 "bdev_virtio_attach_controller", 00:04:32.456 "bdev_virtio_scsi_get_devices", 00:04:32.456 "bdev_virtio_detach_controller", 00:04:32.456 "bdev_virtio_blk_set_hotplug", 00:04:32.456 "bdev_iscsi_delete", 00:04:32.456 "bdev_iscsi_create", 00:04:32.456 "bdev_iscsi_set_options", 00:04:32.456 "accel_error_inject_error", 00:04:32.456 "ioat_scan_accel_module", 00:04:32.456 "dsa_scan_accel_module", 00:04:32.456 "iaa_scan_accel_module", 00:04:32.456 "vfu_virtio_create_scsi_endpoint", 00:04:32.456 "vfu_virtio_scsi_remove_target", 00:04:32.456 "vfu_virtio_scsi_add_target", 00:04:32.456 "vfu_virtio_create_blk_endpoint", 00:04:32.456 "vfu_virtio_delete_endpoint", 00:04:32.456 "keyring_file_remove_key", 00:04:32.456 "keyring_file_add_key", 00:04:32.456 "keyring_linux_set_options", 00:04:32.456 "iscsi_get_histogram", 00:04:32.456 "iscsi_enable_histogram", 00:04:32.456 "iscsi_set_options", 00:04:32.456 "iscsi_get_auth_groups", 00:04:32.456 "iscsi_auth_group_remove_secret", 00:04:32.456 "iscsi_auth_group_add_secret", 00:04:32.456 "iscsi_delete_auth_group", 00:04:32.456 "iscsi_create_auth_group", 00:04:32.456 "iscsi_set_discovery_auth", 00:04:32.456 "iscsi_get_options", 00:04:32.456 "iscsi_target_node_request_logout", 00:04:32.456 "iscsi_target_node_set_redirect", 00:04:32.456 "iscsi_target_node_set_auth", 00:04:32.456 "iscsi_target_node_add_lun", 00:04:32.456 "iscsi_get_stats", 00:04:32.456 "iscsi_get_connections", 00:04:32.456 "iscsi_portal_group_set_auth", 00:04:32.456 "iscsi_start_portal_group", 00:04:32.456 "iscsi_delete_portal_group", 00:04:32.456 "iscsi_create_portal_group", 00:04:32.456 "iscsi_get_portal_groups", 00:04:32.456 "iscsi_delete_target_node", 00:04:32.456 "iscsi_target_node_remove_pg_ig_maps", 00:04:32.456 "iscsi_target_node_add_pg_ig_maps", 00:04:32.456 "iscsi_create_target_node", 00:04:32.456 "iscsi_get_target_nodes", 00:04:32.456 "iscsi_delete_initiator_group", 00:04:32.456 "iscsi_initiator_group_remove_initiators", 00:04:32.456 "iscsi_initiator_group_add_initiators", 00:04:32.456 "iscsi_create_initiator_group", 00:04:32.456 "iscsi_get_initiator_groups", 00:04:32.456 "nvmf_set_crdt", 00:04:32.456 "nvmf_set_config", 00:04:32.456 "nvmf_set_max_subsystems", 00:04:32.456 "nvmf_stop_mdns_prr", 00:04:32.456 "nvmf_publish_mdns_prr", 00:04:32.456 "nvmf_subsystem_get_listeners", 00:04:32.456 "nvmf_subsystem_get_qpairs", 00:04:32.456 "nvmf_subsystem_get_controllers", 00:04:32.456 
"nvmf_get_stats", 00:04:32.456 "nvmf_get_transports", 00:04:32.456 "nvmf_create_transport", 00:04:32.456 "nvmf_get_targets", 00:04:32.456 "nvmf_delete_target", 00:04:32.456 "nvmf_create_target", 00:04:32.456 "nvmf_subsystem_allow_any_host", 00:04:32.457 "nvmf_subsystem_remove_host", 00:04:32.457 "nvmf_subsystem_add_host", 00:04:32.457 "nvmf_ns_remove_host", 00:04:32.457 "nvmf_ns_add_host", 00:04:32.457 "nvmf_subsystem_remove_ns", 00:04:32.457 "nvmf_subsystem_add_ns", 00:04:32.457 "nvmf_subsystem_listener_set_ana_state", 00:04:32.457 "nvmf_discovery_get_referrals", 00:04:32.457 "nvmf_discovery_remove_referral", 00:04:32.457 "nvmf_discovery_add_referral", 00:04:32.457 "nvmf_subsystem_remove_listener", 00:04:32.457 "nvmf_subsystem_add_listener", 00:04:32.457 "nvmf_delete_subsystem", 00:04:32.457 "nvmf_create_subsystem", 00:04:32.457 "nvmf_get_subsystems", 00:04:32.457 "env_dpdk_get_mem_stats", 00:04:32.457 "nbd_get_disks", 00:04:32.457 "nbd_stop_disk", 00:04:32.457 "nbd_start_disk", 00:04:32.457 "ublk_recover_disk", 00:04:32.457 "ublk_get_disks", 00:04:32.457 "ublk_stop_disk", 00:04:32.457 "ublk_start_disk", 00:04:32.457 "ublk_destroy_target", 00:04:32.457 "ublk_create_target", 00:04:32.457 "virtio_blk_create_transport", 00:04:32.457 "virtio_blk_get_transports", 00:04:32.457 "vhost_controller_set_coalescing", 00:04:32.457 "vhost_get_controllers", 00:04:32.457 "vhost_delete_controller", 00:04:32.457 "vhost_create_blk_controller", 00:04:32.457 "vhost_scsi_controller_remove_target", 00:04:32.457 "vhost_scsi_controller_add_target", 00:04:32.457 "vhost_start_scsi_controller", 00:04:32.457 "vhost_create_scsi_controller", 00:04:32.457 "thread_set_cpumask", 00:04:32.457 "framework_get_governor", 00:04:32.457 "framework_get_scheduler", 00:04:32.457 "framework_set_scheduler", 00:04:32.457 "framework_get_reactors", 00:04:32.457 "thread_get_io_channels", 00:04:32.457 "thread_get_pollers", 00:04:32.457 "thread_get_stats", 00:04:32.457 "framework_monitor_context_switch", 00:04:32.457 "spdk_kill_instance", 00:04:32.457 "log_enable_timestamps", 00:04:32.457 "log_get_flags", 00:04:32.457 "log_clear_flag", 00:04:32.457 "log_set_flag", 00:04:32.457 "log_get_level", 00:04:32.457 "log_set_level", 00:04:32.457 "log_get_print_level", 00:04:32.457 "log_set_print_level", 00:04:32.457 "framework_enable_cpumask_locks", 00:04:32.457 "framework_disable_cpumask_locks", 00:04:32.457 "framework_wait_init", 00:04:32.457 "framework_start_init", 00:04:32.457 "scsi_get_devices", 00:04:32.457 "bdev_get_histogram", 00:04:32.457 "bdev_enable_histogram", 00:04:32.457 "bdev_set_qos_limit", 00:04:32.457 "bdev_set_qd_sampling_period", 00:04:32.457 "bdev_get_bdevs", 00:04:32.457 "bdev_reset_iostat", 00:04:32.457 "bdev_get_iostat", 00:04:32.457 "bdev_examine", 00:04:32.457 "bdev_wait_for_examine", 00:04:32.457 "bdev_set_options", 00:04:32.457 "notify_get_notifications", 00:04:32.457 "notify_get_types", 00:04:32.457 "accel_get_stats", 00:04:32.457 "accel_set_options", 00:04:32.457 "accel_set_driver", 00:04:32.457 "accel_crypto_key_destroy", 00:04:32.457 "accel_crypto_keys_get", 00:04:32.457 "accel_crypto_key_create", 00:04:32.457 "accel_assign_opc", 00:04:32.457 "accel_get_module_info", 00:04:32.457 "accel_get_opc_assignments", 00:04:32.457 "vmd_rescan", 00:04:32.457 "vmd_remove_device", 00:04:32.457 "vmd_enable", 00:04:32.457 "sock_get_default_impl", 00:04:32.457 "sock_set_default_impl", 00:04:32.457 "sock_impl_set_options", 00:04:32.457 "sock_impl_get_options", 00:04:32.457 "iobuf_get_stats", 00:04:32.457 "iobuf_set_options", 
00:04:32.457 "keyring_get_keys", 00:04:32.457 "framework_get_pci_devices", 00:04:32.457 "framework_get_config", 00:04:32.457 "framework_get_subsystems", 00:04:32.457 "vfu_tgt_set_base_path", 00:04:32.457 "trace_get_info", 00:04:32.457 "trace_get_tpoint_group_mask", 00:04:32.457 "trace_disable_tpoint_group", 00:04:32.457 "trace_enable_tpoint_group", 00:04:32.457 "trace_clear_tpoint_mask", 00:04:32.457 "trace_set_tpoint_mask", 00:04:32.457 "spdk_get_version", 00:04:32.457 "rpc_get_methods" 00:04:32.457 ] 00:04:32.457 12:43:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 12:43:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:32.457 12:43:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3272068 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3272068 ']' 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3272068 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3272068 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3272068' 00:04:32.457 killing process with pid 3272068 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3272068 00:04:32.457 12:43:50 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3272068 00:04:33.023 00:04:33.023 real 0m1.249s 00:04:33.023 user 0m2.192s 00:04:33.023 sys 0m0.436s 00:04:33.023 12:43:50 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.023 12:43:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.023 ************************************ 00:04:33.023 END TEST spdkcli_tcp 00:04:33.023 ************************************ 00:04:33.023 12:43:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:33.023 12:43:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.023 12:43:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.023 12:43:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.023 12:43:50 -- common/autotest_common.sh@10 -- # set +x 00:04:33.023 ************************************ 00:04:33.023 START TEST dpdk_mem_utility 00:04:33.023 ************************************ 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:33.023 * Looking for test storage... 
00:04:33.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:33.023 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:33.023 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3272277 00:04:33.023 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.023 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3272277 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3272277 ']' 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.023 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.023 [2024-07-15 12:43:51.111416] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:33.023 [2024-07-15 12:43:51.111501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272277 ] 00:04:33.023 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.023 [2024-07-15 12:43:51.172115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.282 [2024-07-15 12:43:51.279118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.541 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.541 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:33.541 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:33.541 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:33.541 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:33.541 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:33.541 { 00:04:33.541 "filename": "/tmp/spdk_mem_dump.txt" 00:04:33.541 } 00:04:33.541 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:33.541 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:33.541 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:33.541 1 heaps totaling size 814.000000 MiB 00:04:33.541 size: 814.000000 MiB heap id: 0 00:04:33.541 end heaps---------- 00:04:33.541 8 mempools totaling size 598.116089 MiB 00:04:33.541 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:33.541 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:33.541 size: 84.521057 MiB name: bdev_io_3272277 00:04:33.541 size: 51.011292 MiB name: evtpool_3272277 00:04:33.541 
size: 50.003479 MiB name: msgpool_3272277 00:04:33.541 size: 21.763794 MiB name: PDU_Pool 00:04:33.541 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:33.541 size: 0.026123 MiB name: Session_Pool 00:04:33.541 end mempools------- 00:04:33.541 6 memzones totaling size 4.142822 MiB 00:04:33.541 size: 1.000366 MiB name: RG_ring_0_3272277 00:04:33.541 size: 1.000366 MiB name: RG_ring_1_3272277 00:04:33.541 size: 1.000366 MiB name: RG_ring_4_3272277 00:04:33.541 size: 1.000366 MiB name: RG_ring_5_3272277 00:04:33.541 size: 0.125366 MiB name: RG_ring_2_3272277 00:04:33.541 size: 0.015991 MiB name: RG_ring_3_3272277 00:04:33.541 end memzones------- 00:04:33.541 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:33.541 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:33.541 list of free elements. size: 12.519348 MiB 00:04:33.541 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:33.541 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:33.541 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:33.541 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:33.541 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:33.541 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:33.541 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:33.541 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:33.541 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:33.541 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:33.541 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:33.541 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:33.541 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:33.541 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:33.541 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:33.541 list of standard malloc elements. 
size: 199.218079 MiB 00:04:33.541 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:33.541 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:33.541 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:33.541 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:33.541 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:33.541 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:33.541 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:33.541 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:33.541 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:33.541 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:33.541 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:33.541 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:33.541 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:33.541 list of memzone associated elements. 
size: 602.262573 MiB 00:04:33.541 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:33.541 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:33.541 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:33.541 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:33.541 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:33.541 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3272277_0 00:04:33.541 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:33.541 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3272277_0 00:04:33.541 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:33.541 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3272277_0 00:04:33.541 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:33.541 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:33.542 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:33.542 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:33.542 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:33.542 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3272277 00:04:33.542 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:33.542 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3272277 00:04:33.542 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:33.542 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3272277 00:04:33.542 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:33.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:33.542 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:33.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:33.542 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:33.542 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:33.542 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:33.542 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:33.542 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:33.542 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3272277 00:04:33.542 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:33.542 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3272277 00:04:33.542 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:33.542 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3272277 00:04:33.542 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:33.542 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3272277 00:04:33.542 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:33.542 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3272277 00:04:33.542 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:33.542 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:33.542 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:33.542 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:33.542 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:33.542 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:33.542 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:33.542 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3272277 00:04:33.542 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:33.542 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:33.542 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:33.542 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:33.542 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:33.542 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3272277 00:04:33.542 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:33.542 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:33.542 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:33.542 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3272277 00:04:33.542 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:33.542 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3272277 00:04:33.542 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:33.542 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:33.542 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:33.542 12:43:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3272277 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3272277 ']' 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3272277 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3272277 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3272277' 00:04:33.542 killing process with pid 3272277 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3272277 00:04:33.542 12:43:51 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3272277 00:04:34.108 00:04:34.108 real 0m1.086s 00:04:34.108 user 0m1.048s 00:04:34.108 sys 0m0.414s 00:04:34.108 12:43:52 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.108 12:43:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.108 ************************************ 00:04:34.108 END TEST dpdk_mem_utility 00:04:34.108 ************************************ 00:04:34.108 12:43:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.108 12:43:52 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.108 12:43:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.108 12:43:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.108 12:43:52 -- common/autotest_common.sh@10 -- # set +x 00:04:34.108 ************************************ 00:04:34.108 START TEST event 00:04:34.108 ************************************ 00:04:34.108 12:43:52 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:34.108 * Looking for test storage... 
00:04:34.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:34.108 12:43:52 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:34.108 12:43:52 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:34.108 12:43:52 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:34.108 12:43:52 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:34.108 12:43:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.108 12:43:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.108 ************************************ 00:04:34.108 START TEST event_perf 00:04:34.108 ************************************ 00:04:34.108 12:43:52 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:34.108 Running I/O for 1 seconds...[2024-07-15 12:43:52.218339] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:34.108 [2024-07-15 12:43:52.218408] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272482 ] 00:04:34.108 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.108 [2024-07-15 12:43:52.278486] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:34.367 [2024-07-15 12:43:52.393280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.367 [2024-07-15 12:43:52.393335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.367 [2024-07-15 12:43:52.393401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:34.367 [2024-07-15 12:43:52.393404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.301 Running I/O for 1 seconds... 00:04:35.301 lcore 0: 233030 00:04:35.301 lcore 1: 233030 00:04:35.301 lcore 2: 233029 00:04:35.301 lcore 3: 233030 00:04:35.301 done. 00:04:35.301 00:04:35.301 real 0m1.301s 00:04:35.301 user 0m4.208s 00:04:35.301 sys 0m0.088s 00:04:35.301 12:43:53 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.301 12:43:53 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.301 ************************************ 00:04:35.301 END TEST event_perf 00:04:35.301 ************************************ 00:04:35.559 12:43:53 event -- common/autotest_common.sh@1142 -- # return 0 00:04:35.559 12:43:53 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:35.559 12:43:53 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:35.559 12:43:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.559 12:43:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.559 ************************************ 00:04:35.559 START TEST event_reactor 00:04:35.559 ************************************ 00:04:35.559 12:43:53 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:35.559 [2024-07-15 12:43:53.566972] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:04:35.559 [2024-07-15 12:43:53.567045] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272737 ] 00:04:35.559 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.559 [2024-07-15 12:43:53.623918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.559 [2024-07-15 12:43:53.729180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.930 test_start 00:04:36.930 oneshot 00:04:36.930 tick 100 00:04:36.930 tick 100 00:04:36.930 tick 250 00:04:36.930 tick 100 00:04:36.930 tick 100 00:04:36.930 tick 100 00:04:36.930 tick 250 00:04:36.930 tick 500 00:04:36.930 tick 100 00:04:36.930 tick 100 00:04:36.930 tick 250 00:04:36.930 tick 100 00:04:36.930 tick 100 00:04:36.930 test_end 00:04:36.930 00:04:36.930 real 0m1.286s 00:04:36.930 user 0m1.205s 00:04:36.930 sys 0m0.077s 00:04:36.930 12:43:54 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.930 12:43:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:36.930 ************************************ 00:04:36.930 END TEST event_reactor 00:04:36.930 ************************************ 00:04:36.930 12:43:54 event -- common/autotest_common.sh@1142 -- # return 0 00:04:36.930 12:43:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:36.930 12:43:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:36.930 12:43:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.930 12:43:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.930 ************************************ 00:04:36.930 START TEST event_reactor_perf 00:04:36.930 ************************************ 00:04:36.930 12:43:54 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:36.930 [2024-07-15 12:43:54.903382] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:04:36.930 [2024-07-15 12:43:54.903449] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3272891 ] 00:04:36.930 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.930 [2024-07-15 12:43:54.960383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.930 [2024-07-15 12:43:55.067415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.309 test_start 00:04:38.309 test_end 00:04:38.309 Performance: 447229 events per second 00:04:38.309 00:04:38.309 real 0m1.287s 00:04:38.309 user 0m1.207s 00:04:38.309 sys 0m0.076s 00:04:38.309 12:43:56 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.309 12:43:56 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.309 ************************************ 00:04:38.309 END TEST event_reactor_perf 00:04:38.309 ************************************ 00:04:38.309 12:43:56 event -- common/autotest_common.sh@1142 -- # return 0 00:04:38.309 12:43:56 event -- event/event.sh@49 -- # uname -s 00:04:38.309 12:43:56 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:38.309 12:43:56 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:38.309 12:43:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.309 12:43:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.309 12:43:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.309 ************************************ 00:04:38.309 START TEST event_scheduler 00:04:38.309 ************************************ 00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:38.309 * Looking for test storage... 00:04:38.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:38.309 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:38.309 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3273080 00:04:38.309 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:38.309 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:38.309 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3273080 00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3273080 ']' 00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.309 12:43:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.309 [2024-07-15 12:43:56.323489] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:38.309 [2024-07-15 12:43:56.323561] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273080 ] 00:04:38.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.309 [2024-07-15 12:43:56.380573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.309 [2024-07-15 12:43:56.489229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.309 [2024-07-15 12:43:56.489286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.309 [2024-07-15 12:43:56.489355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.309 [2024-07-15 12:43:56.489358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:38.568 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.568 [2024-07-15 12:43:56.526110] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:38.568 [2024-07-15 12:43:56.526136] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:38.568 [2024-07-15 12:43:56.526152] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:38.568 [2024-07-15 12:43:56.526179] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:38.568 [2024-07-15 12:43:56.526189] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.568 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.568 [2024-07-15 12:43:56.622995] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.568 12:43:56 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.568 12:43:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:38.568 ************************************ 00:04:38.568 START TEST scheduler_create_thread 00:04:38.568 ************************************ 00:04:38.568 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 2 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 3 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 4 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 5 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 6 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 7 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 8 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 9 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 10 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.569 12:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.138 12:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.138 00:04:39.138 real 0m0.588s 00:04:39.138 user 0m0.009s 00:04:39.138 sys 0m0.005s 00:04:39.138 12:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.138 12:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.138 ************************************ 00:04:39.138 END TEST scheduler_create_thread 00:04:39.138 ************************************ 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:39.138 12:43:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:39.138 12:43:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3273080 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3273080 ']' 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3273080 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3273080 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3273080' 00:04:39.138 killing process with pid 3273080 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3273080 00:04:39.138 12:43:57 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3273080 00:04:39.706 [2024-07-15 12:43:57.719108] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:39.965 00:04:39.965 real 0m1.752s 00:04:39.965 user 0m2.180s 00:04:39.965 sys 0m0.310s 00:04:39.965 12:43:57 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.965 12:43:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:39.965 ************************************ 00:04:39.965 END TEST event_scheduler 00:04:39.965 ************************************ 00:04:39.965 12:43:58 event -- common/autotest_common.sh@1142 -- # return 0 00:04:39.965 12:43:58 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:39.965 12:43:58 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:39.965 12:43:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.965 12:43:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.965 12:43:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.965 ************************************ 00:04:39.965 START TEST app_repeat 00:04:39.965 ************************************ 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3273385 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3273385' 00:04:39.965 Process app_repeat pid: 3273385 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:39.965 spdk_app_start Round 0 00:04:39.965 12:43:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3273385 /var/tmp/spdk-nbd.sock 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3273385 ']' 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:39.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.965 12:43:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:39.965 [2024-07-15 12:43:58.060272] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:04:39.965 [2024-07-15 12:43:58.060341] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3273385 ] 00:04:39.965 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.965 [2024-07-15 12:43:58.118434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:40.223 [2024-07-15 12:43:58.220375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.223 [2024-07-15 12:43:58.220379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.223 12:43:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.223 12:43:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:40.223 12:43:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.480 Malloc0 00:04:40.480 12:43:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.738 Malloc1 00:04:40.738 12:43:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.738 12:43:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.995 /dev/nbd0 00:04:40.995 12:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.995 12:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:40.995 12:43:59 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:40.995 12:43:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.995 1+0 records in 00:04:40.995 1+0 records out 00:04:40.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001654 s, 24.8 MB/s 00:04:40.996 12:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.996 12:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:40.996 12:43:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:40.996 12:43:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:40.996 12:43:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:40.996 12:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.996 12:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.996 12:43:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:41.253 /dev/nbd1 00:04:41.253 12:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:41.253 12:43:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:41.253 1+0 records in 00:04:41.253 1+0 records out 00:04:41.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232777 s, 17.6 MB/s 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:41.253 12:43:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:41.253 12:43:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:41.253 12:43:59 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:41.253 12:43:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.253 12:43:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.253 12:43:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:41.511 { 00:04:41.511 "nbd_device": "/dev/nbd0", 00:04:41.511 "bdev_name": "Malloc0" 00:04:41.511 }, 00:04:41.511 { 00:04:41.511 "nbd_device": "/dev/nbd1", 00:04:41.511 "bdev_name": "Malloc1" 00:04:41.511 } 00:04:41.511 ]' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:41.511 { 00:04:41.511 "nbd_device": "/dev/nbd0", 00:04:41.511 "bdev_name": "Malloc0" 00:04:41.511 }, 00:04:41.511 { 00:04:41.511 "nbd_device": "/dev/nbd1", 00:04:41.511 "bdev_name": "Malloc1" 00:04:41.511 } 00:04:41.511 ]' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:41.511 /dev/nbd1' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:41.511 /dev/nbd1' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:41.511 12:43:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:41.511 256+0 records in 00:04:41.511 256+0 records out 00:04:41.511 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049909 s, 210 MB/s 00:04:41.512 12:43:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.512 12:43:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:41.769 256+0 records in 00:04:41.769 256+0 records out 00:04:41.769 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021221 s, 49.4 MB/s 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:41.769 256+0 records in 00:04:41.769 256+0 records out 00:04:41.769 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0223828 s, 46.8 MB/s 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.769 12:43:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:42.027 12:44:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:42.285 12:44:00 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.285 12:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:42.542 12:44:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:42.542 12:44:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:42.801 12:44:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:43.059 [2024-07-15 12:44:01.123834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.059 [2024-07-15 12:44:01.234699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.059 [2024-07-15 12:44:01.234699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.319 [2024-07-15 12:44:01.289408] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:43.319 [2024-07-15 12:44:01.289462] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.846 12:44:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.846 12:44:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:45.846 spdk_app_start Round 1 00:04:45.846 12:44:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3273385 /var/tmp/spdk-nbd.sock 00:04:45.846 12:44:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3273385 ']' 00:04:45.846 12:44:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.846 12:44:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:45.846 12:44:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:45.846 12:44:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:45.846 12:44:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.105 12:44:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.105 12:44:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:46.105 12:44:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.382 Malloc0 00:04:46.383 12:44:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.641 Malloc1 00:04:46.641 12:44:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.641 12:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.900 /dev/nbd0 00:04:46.900 12:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.900 12:44:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:46.900 1+0 records in 00:04:46.900 1+0 records out 00:04:46.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203385 s, 20.1 MB/s 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:46.900 12:44:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:46.900 12:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.900 12:44:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.900 12:44:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:47.188 /dev/nbd1 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.188 1+0 records in 00:04:47.188 1+0 records out 00:04:47.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021957 s, 18.7 MB/s 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:47.188 12:44:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.188 12:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:47.470 { 00:04:47.470 "nbd_device": "/dev/nbd0", 00:04:47.470 "bdev_name": "Malloc0" 00:04:47.470 }, 00:04:47.470 { 00:04:47.470 "nbd_device": "/dev/nbd1", 00:04:47.470 "bdev_name": "Malloc1" 00:04:47.470 } 00:04:47.470 ]' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:47.470 { 00:04:47.470 "nbd_device": "/dev/nbd0", 00:04:47.470 "bdev_name": "Malloc0" 00:04:47.470 }, 00:04:47.470 { 00:04:47.470 "nbd_device": "/dev/nbd1", 00:04:47.470 "bdev_name": "Malloc1" 00:04:47.470 } 00:04:47.470 ]' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:47.470 /dev/nbd1' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:47.470 /dev/nbd1' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:47.470 256+0 records in 00:04:47.470 256+0 records out 00:04:47.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504034 s, 208 MB/s 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:47.470 256+0 records in 00:04:47.470 256+0 records out 00:04:47.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208198 s, 50.4 MB/s 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:47.470 256+0 records in 00:04:47.470 256+0 records out 00:04:47.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225874 s, 46.4 MB/s 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.470 12:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.729 12:44:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.987 12:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:48.245 12:44:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:48.245 12:44:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:48.504 12:44:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:48.763 [2024-07-15 12:44:06.921099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.022 [2024-07-15 12:44:07.025027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.022 [2024-07-15 12:44:07.025028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.022 [2024-07-15 12:44:07.079091] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.022 [2024-07-15 12:44:07.079167] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:51.558 12:44:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.559 12:44:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:51.559 spdk_app_start Round 2 00:04:51.559 12:44:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3273385 /var/tmp/spdk-nbd.sock 00:04:51.559 12:44:09 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3273385 ']' 00:04:51.559 12:44:09 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.559 12:44:09 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.559 12:44:09 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:51.559 12:44:09 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.559 12:44:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.817 12:44:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.817 12:44:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:51.817 12:44:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.075 Malloc0 00:04:52.075 12:44:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.334 Malloc1 00:04:52.334 12:44:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.334 12:44:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.594 /dev/nbd0 00:04:52.594 12:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.594 12:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:52.594 1+0 records in 00:04:52.594 1+0 records out 00:04:52.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186828 s, 21.9 MB/s 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.594 12:44:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.594 12:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.594 12:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.594 12:44:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.850 /dev/nbd1 00:04:52.850 12:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.850 12:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.850 12:44:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:52.850 12:44:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:52.850 12:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.851 1+0 records in 00:04:52.851 1+0 records out 00:04:52.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197353 s, 20.8 MB/s 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:52.851 12:44:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:52.851 12:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.851 12:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.851 12:44:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.851 12:44:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.851 12:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:53.107 { 00:04:53.107 "nbd_device": "/dev/nbd0", 00:04:53.107 "bdev_name": "Malloc0" 00:04:53.107 }, 00:04:53.107 { 00:04:53.107 "nbd_device": "/dev/nbd1", 00:04:53.107 "bdev_name": "Malloc1" 00:04:53.107 } 00:04:53.107 ]' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.107 { 00:04:53.107 "nbd_device": "/dev/nbd0", 00:04:53.107 "bdev_name": "Malloc0" 00:04:53.107 }, 00:04:53.107 { 00:04:53.107 "nbd_device": "/dev/nbd1", 00:04:53.107 "bdev_name": "Malloc1" 00:04:53.107 } 00:04:53.107 ]' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.107 /dev/nbd1' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.107 /dev/nbd1' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.107 256+0 records in 00:04:53.107 256+0 records out 00:04:53.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380859 s, 275 MB/s 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.107 256+0 records in 00:04:53.107 256+0 records out 00:04:53.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207105 s, 50.6 MB/s 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.107 12:44:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.365 256+0 records in 00:04:53.365 256+0 records out 00:04:53.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238765 s, 43.9 MB/s 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.365 12:44:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.366 12:44:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.623 12:44:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.880 12:44:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.137 12:44:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.137 12:44:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.396 12:44:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.656 [2024-07-15 12:44:12.679996] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.656 [2024-07-15 12:44:12.781641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.656 [2024-07-15 12:44:12.781642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.656 [2024-07-15 12:44:12.838891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.656 [2024-07-15 12:44:12.838971] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.946 12:44:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3273385 /var/tmp/spdk-nbd.sock 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3273385 ']' 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:57.946 12:44:15 event.app_repeat -- event/event.sh@39 -- # killprocess 3273385 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3273385 ']' 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3273385 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3273385 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3273385' 00:04:57.946 killing process with pid 3273385 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3273385 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3273385 00:04:57.946 spdk_app_start is called in Round 0. 00:04:57.946 Shutdown signal received, stop current app iteration 00:04:57.946 Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 reinitialization... 00:04:57.946 spdk_app_start is called in Round 1. 00:04:57.946 Shutdown signal received, stop current app iteration 00:04:57.946 Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 reinitialization... 00:04:57.946 spdk_app_start is called in Round 2. 00:04:57.946 Shutdown signal received, stop current app iteration 00:04:57.946 Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 reinitialization... 00:04:57.946 spdk_app_start is called in Round 3. 
00:04:57.946 Shutdown signal received, stop current app iteration 00:04:57.946 12:44:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:57.946 12:44:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:57.946 00:04:57.946 real 0m17.911s 00:04:57.946 user 0m38.833s 00:04:57.946 sys 0m3.171s 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.946 12:44:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.946 ************************************ 00:04:57.946 END TEST app_repeat 00:04:57.946 ************************************ 00:04:57.946 12:44:15 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.946 12:44:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:57.946 12:44:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:57.946 12:44:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.946 12:44:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.946 12:44:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.946 ************************************ 00:04:57.946 START TEST cpu_locks 00:04:57.946 ************************************ 00:04:57.946 12:44:15 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:57.946 * Looking for test storage... 00:04:57.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:57.946 12:44:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:57.946 12:44:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:57.947 12:44:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:57.947 12:44:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:57.947 12:44:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:57.947 12:44:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.947 12:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.947 ************************************ 00:04:57.947 START TEST default_locks 00:04:57.947 ************************************ 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3275747 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3275747 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3275747 ']' 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.947 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.947 [2024-07-15 12:44:16.134568] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:57.947 [2024-07-15 12:44:16.134663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275747 ] 00:04:58.206 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.207 [2024-07-15 12:44:16.192631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.207 [2024-07-15 12:44:16.305671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.466 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.466 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:58.466 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3275747 00:04:58.466 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3275747 00:04:58.466 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.726 lslocks: write error 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3275747 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3275747 ']' 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3275747 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3275747 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3275747' 00:04:58.726 killing process with pid 3275747 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3275747 00:04:58.726 12:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3275747 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3275747 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3275747 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3275747 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3275747 ']' 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3275747) - No such process 00:04:59.294 ERROR: process (pid: 3275747) is no longer running 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.294 00:04:59.294 real 0m1.191s 00:04:59.294 user 0m1.126s 00:04:59.294 sys 0m0.512s 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.294 12:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.294 ************************************ 00:04:59.294 END TEST default_locks 00:04:59.294 ************************************ 00:04:59.294 12:44:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:04:59.294 12:44:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:59.294 12:44:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.294 12:44:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.294 12:44:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.294 ************************************ 00:04:59.294 START TEST default_locks_via_rpc 00:04:59.294 ************************************ 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3275918 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.294 12:44:17 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3275918 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3275918 ']' 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.294 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.294 [2024-07-15 12:44:17.375283] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:04:59.294 [2024-07-15 12:44:17.375377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275918 ] 00:04:59.294 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.294 [2024-07-15 12:44:17.433817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.554 [2024-07-15 12:44:17.541156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3275918 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3275918 00:04:59.814 12:44:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.074 
12:44:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3275918 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3275918 ']' 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3275918 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3275918 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3275918' 00:05:00.074 killing process with pid 3275918 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3275918 00:05:00.074 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3275918 00:05:00.332 00:05:00.332 real 0m1.210s 00:05:00.332 user 0m1.158s 00:05:00.332 sys 0m0.491s 00:05:00.332 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.332 12:44:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.332 ************************************ 00:05:00.332 END TEST default_locks_via_rpc 00:05:00.332 ************************************ 00:05:00.590 12:44:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:00.590 12:44:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:00.590 12:44:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.590 12:44:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.590 12:44:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.590 ************************************ 00:05:00.590 START TEST non_locking_app_on_locked_coremask 00:05:00.590 ************************************ 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3276087 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3276087 /var/tmp/spdk.sock 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3276087 ']' 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.590 12:44:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:00.590 [2024-07-15 12:44:18.628012] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:00.590 [2024-07-15 12:44:18.628114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276087 ] 00:05:00.590 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.590 [2024-07-15 12:44:18.683432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.590 [2024-07-15 12:44:18.783217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.847 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.847 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:00.847 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3276091 00:05:00.847 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3276091 /var/tmp/spdk2.sock 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3276091 ']' 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.848 12:44:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.106 [2024-07-15 12:44:19.079530] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:01.106 [2024-07-15 12:44:19.079621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276091 ] 00:05:01.106 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.106 [2024-07-15 12:44:19.162116] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
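The locks_exist checks that recur through this suite reduce to asking lslocks whether a given spdk_tgt pid still holds a file named spdk_cpu_lock_*; the stray "lslocks: write error" lines are almost certainly just lslocks hitting a closed pipe once grep -q has matched and exited, not a failure of the test itself. A minimal sketch of that check, assuming the lock files keep the /var/tmp/spdk_cpu_lock_* naming shown later in this log:

# locks_exist in spirit: does <pid> still hold an SPDK CPU-core lock file?
# (grep -q exits on the first match, which is what makes lslocks print "write error")
pid="$1"    # any spdk_tgt pid from the runs above, e.g. 3275747
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid still holds a CPU core lock"
else
    echo "pid $pid holds no CPU core lock"
fi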
00:05:01.106 [2024-07-15 12:44:19.162142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.363 [2024-07-15 12:44:19.371458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.929 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.929 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:01.929 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3276087 00:05:01.929 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3276087 00:05:01.929 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.494 lslocks: write error 00:05:02.494 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3276087 00:05:02.494 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3276087 ']' 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3276087 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276087 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276087' 00:05:02.495 killing process with pid 3276087 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3276087 00:05:02.495 12:44:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3276087 00:05:03.433 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3276091 00:05:03.433 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3276091 ']' 00:05:03.433 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3276091 00:05:03.433 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276091 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276091' 00:05:03.434 
killing process with pid 3276091 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3276091 00:05:03.434 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3276091 00:05:03.692 00:05:03.692 real 0m3.280s 00:05:03.692 user 0m3.461s 00:05:03.692 sys 0m1.019s 00:05:03.692 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.692 12:44:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.692 ************************************ 00:05:03.692 END TEST non_locking_app_on_locked_coremask 00:05:03.692 ************************************ 00:05:03.692 12:44:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:03.692 12:44:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:03.692 12:44:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.692 12:44:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.692 12:44:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.952 ************************************ 00:05:03.952 START TEST locking_app_on_unlocked_coremask 00:05:03.952 ************************************ 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3276518 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3276518 /var/tmp/spdk.sock 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3276518 ']' 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.952 12:44:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.952 [2024-07-15 12:44:21.964862] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:03.952 [2024-07-15 12:44:21.964958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276518 ] 00:05:03.952 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.952 [2024-07-15 12:44:22.023947] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:03.952 [2024-07-15 12:44:22.023984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.952 [2024-07-15 12:44:22.134577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3276531 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3276531 /var/tmp/spdk2.sock 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3276531 ']' 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.211 12:44:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.471 [2024-07-15 12:44:22.426573] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
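The two tests around this point exercise the same scenario from opposite directions: two spdk_tgt instances are started on the same core mask (-m 0x1), and whichever instance runs with --disable-cpumask-locks skips the core-lock claim (the "CPU core locks deactivated" notices above), so both can run on core 0 at once. A condensed sketch of the launch sequence, using the binary and socket paths printed in this log (the SPDK_BIN variable is just shorthand here, and the background jobs stand in for the waitforlisten calls the real scripts use):

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

# first instance: default behaviour, claims the lock for core 0
"$SPDK_BIN" -m 0x1 &

# second instance: same core mask, but told not to claim core locks,
# and pointed at a second RPC socket so the two targets do not collide
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

In locking_app_on_unlocked_coremask, which is starting here, the roles are reversed: the first instance disables the locks, so it is the second, locking instance that ends up owning /var/tmp/spdk_cpu_lock_000 and passing the locks_exist check.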
00:05:04.471 [2024-07-15 12:44:22.426660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276531 ] 00:05:04.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.471 [2024-07-15 12:44:22.508365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.731 [2024-07-15 12:44:22.718468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.297 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.297 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:05.297 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3276531 00:05:05.297 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.297 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3276531 00:05:05.863 lslocks: write error 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3276518 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3276518 ']' 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3276518 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276518 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276518' 00:05:05.863 killing process with pid 3276518 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3276518 00:05:05.863 12:44:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3276518 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3276531 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3276531 ']' 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3276531 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276531 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276531' 00:05:06.804 killing process with pid 3276531 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3276531 00:05:06.804 12:44:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3276531 00:05:07.063 00:05:07.063 real 0m3.293s 00:05:07.063 user 0m3.492s 00:05:07.063 sys 0m1.017s 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 ************************************ 00:05:07.063 END TEST locking_app_on_unlocked_coremask 00:05:07.063 ************************************ 00:05:07.063 12:44:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:07.063 12:44:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:07.063 12:44:25 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.063 12:44:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.063 12:44:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.063 ************************************ 00:05:07.063 START TEST locking_app_on_locked_coremask 00:05:07.063 ************************************ 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3276948 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3276948 /var/tmp/spdk.sock 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3276948 ']' 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.063 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.323 [2024-07-15 12:44:25.307100] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:07.323 [2024-07-15 12:44:25.307186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276948 ] 00:05:07.323 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.323 [2024-07-15 12:44:25.365909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.323 [2024-07-15 12:44:25.471101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3276965 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3276965 /var/tmp/spdk2.sock 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3276965 /var/tmp/spdk2.sock 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3276965 /var/tmp/spdk2.sock 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3276965 ']' 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.582 12:44:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.582 [2024-07-15 12:44:25.767193] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:07.582 [2024-07-15 12:44:25.767275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276965 ] 00:05:07.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.843 [2024-07-15 12:44:25.851659] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3276948 has claimed it. 00:05:07.843 [2024-07-15 12:44:25.851729] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:08.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3276965) - No such process 00:05:08.411 ERROR: process (pid: 3276965) is no longer running 00:05:08.411 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.411 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:08.411 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:08.411 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:08.412 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:08.412 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:08.412 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3276948 00:05:08.412 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3276948 00:05:08.412 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.670 lslocks: write error 00:05:08.670 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3276948 00:05:08.670 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3276948 ']' 00:05:08.670 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3276948 00:05:08.670 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:08.670 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.670 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3276948 00:05:08.928 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.928 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.928 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3276948' 00:05:08.928 killing process with pid 3276948 00:05:08.928 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3276948 00:05:08.928 12:44:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3276948 00:05:09.186 00:05:09.186 real 0m2.066s 00:05:09.186 user 0m2.238s 00:05:09.186 sys 0m0.635s 00:05:09.186 12:44:27 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.186 12:44:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.186 ************************************ 00:05:09.186 END TEST locking_app_on_locked_coremask 00:05:09.186 ************************************ 00:05:09.186 12:44:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:09.186 12:44:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:09.186 12:44:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.186 12:44:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.186 12:44:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.186 ************************************ 00:05:09.186 START TEST locking_overlapped_coremask 00:05:09.186 ************************************ 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3277242 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3277242 /var/tmp/spdk.sock 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3277242 ']' 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.186 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.444 [2024-07-15 12:44:27.427127] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:09.444 [2024-07-15 12:44:27.427211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277242 ] 00:05:09.444 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.444 [2024-07-15 12:44:27.483244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:09.444 [2024-07-15 12:44:27.590933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.444 [2024-07-15 12:44:27.590991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.444 [2024-07-15 12:44:27.590995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3277266 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3277266 /var/tmp/spdk2.sock 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3277266 /var/tmp/spdk2.sock 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3277266 /var/tmp/spdk2.sock 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3277266 ']' 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:09.701 12:44:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.701 [2024-07-15 12:44:27.896967] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
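The overlap that locking_overlapped_coremask depends on is easiest to see by expanding the two masks: the first target's -m 0x7 covers cores 0 through 2 and the second target's -m 0x1c covers cores 2 through 4, so the only contested core is core 2, which is exactly the core named in the claim_cpu_cores error that follows. A small illustrative helper for expanding a hex coremask (not part of the test scripts):

# expand a hex coremask into the core numbers it selects
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then cores+="$core "; fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "$cores"
}
mask_to_cores 0x7     # 0 1 2   (first target)
mask_to_cores 0x1c    # 2 3 4   (second target; core 2 overlaps, so its claim fails)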
00:05:09.701 [2024-07-15 12:44:27.897061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277266 ] 00:05:09.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.960 [2024-07-15 12:44:27.986293] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3277242 has claimed it. 00:05:09.960 [2024-07-15 12:44:27.986367] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.528 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3277266) - No such process 00:05:10.528 ERROR: process (pid: 3277266) is no longer running 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3277242 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3277242 ']' 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3277242 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3277242 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3277242' 00:05:10.528 killing process with pid 3277242 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3277242 00:05:10.528 12:44:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3277242 00:05:11.144 00:05:11.144 real 0m1.675s 00:05:11.144 user 0m4.461s 00:05:11.144 sys 0m0.439s 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.144 ************************************ 00:05:11.144 END TEST locking_overlapped_coremask 00:05:11.144 ************************************ 00:05:11.144 12:44:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:11.144 12:44:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:11.144 12:44:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.144 12:44:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.144 12:44:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.144 ************************************ 00:05:11.144 START TEST locking_overlapped_coremask_via_rpc 00:05:11.144 ************************************ 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3277428 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3277428 /var/tmp/spdk.sock 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3277428 ']' 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.144 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.144 [2024-07-15 12:44:29.154963] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:11.144 [2024-07-15 12:44:29.155057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277428 ] 00:05:11.144 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.144 [2024-07-15 12:44:29.210777] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
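Once the overlapping claim has been rejected, the previous test's check_remaining_locks step compares the lock files actually present under /var/tmp with the set the surviving -m 0x7 target should own. A sketch of what that comparison amounts to, reusing the array names and paths shown in the log:

# check_remaining_locks, in essence: only cores 0-2 may still be locked
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ "${locks[*]}" == "${locks_expected[*]}" ]] \
    || { echo "unexpected lock files: ${locks[*]}" >&2; exit 1; }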
00:05:11.144 [2024-07-15 12:44:29.210807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.144 [2024-07-15 12:44:29.316850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.144 [2024-07-15 12:44:29.317112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.144 [2024-07-15 12:44:29.317118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3277556 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3277556 /var/tmp/spdk2.sock 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3277556 ']' 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.402 12:44:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.660 [2024-07-15 12:44:29.623674] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:11.660 [2024-07-15 12:44:29.623770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277556 ] 00:05:11.660 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.660 [2024-07-15 12:44:29.711275] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:11.660 [2024-07-15 12:44:29.711317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.920 [2024-07-15 12:44:29.929650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.920 [2024-07-15 12:44:29.932807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:11.920 [2024-07-15 12:44:29.932809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.495 [2024-07-15 12:44:30.579844] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3277428 has claimed it. 
00:05:12.495 request: 00:05:12.495 { 00:05:12.495 "method": "framework_enable_cpumask_locks", 00:05:12.495 "req_id": 1 00:05:12.495 } 00:05:12.495 Got JSON-RPC error response 00:05:12.495 response: 00:05:12.495 { 00:05:12.495 "code": -32603, 00:05:12.495 "message": "Failed to claim CPU core: 2" 00:05:12.495 } 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3277428 /var/tmp/spdk.sock 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3277428 ']' 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.495 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3277556 /var/tmp/spdk2.sock 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3277556 ']' 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
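Editor's note (not part of the captured log): the JSON-RPC failure above is the expected outcome of the locking_overlapped_coremask_via_rpc case. The first spdk_tgt (pid 3277428) already holds the CPU core lock files for cores 0-2, while the second target is started on an overlapping mask with --disable-cpumask-locks, so its later attempt to claim locks through the framework_enable_cpumask_locks RPC fails on core 2. A minimal bash sketch of that flow, assuming the long binary paths from the log are shortened to spdk_tgt and that rpc_cmd behaves like the test helper of the same name:

  # the first target (pid 3277428 above) holds /var/tmp/spdk_cpu_lock_000..002 for cores 0-2
  # the second target overlaps it on core 2 but starts without taking any locks:
  spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # asking it to claim its cores afterwards is expected to fail on the contested core:
  rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      && echo "unexpected success: core 2 should already be claimed" >&2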
00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.753 12:44:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.013 00:05:13.013 real 0m2.010s 00:05:13.013 user 0m1.041s 00:05:13.013 sys 0m0.180s 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.013 12:44:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.013 ************************************ 00:05:13.013 END TEST locking_overlapped_coremask_via_rpc 00:05:13.013 ************************************ 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.013 12:44:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:13.013 12:44:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3277428 ]] 00:05:13.013 12:44:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3277428 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3277428 ']' 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3277428 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3277428 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3277428' 00:05:13.013 killing process with pid 3277428 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3277428 00:05:13.013 12:44:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3277428 00:05:13.583 12:44:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3277556 ]] 00:05:13.583 12:44:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3277556 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3277556 ']' 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3277556 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3277556 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3277556' 00:05:13.583 killing process with pid 3277556 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3277556 00:05:13.583 12:44:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3277556 00:05:14.154 12:44:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.154 12:44:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:14.154 12:44:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3277428 ]] 00:05:14.154 12:44:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3277428 00:05:14.154 12:44:32 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3277428 ']' 00:05:14.154 12:44:32 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3277428 00:05:14.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3277428) - No such process 00:05:14.154 12:44:32 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3277428 is not found' 00:05:14.155 Process with pid 3277428 is not found 00:05:14.155 12:44:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3277556 ]] 00:05:14.155 12:44:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3277556 00:05:14.155 12:44:32 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3277556 ']' 00:05:14.155 12:44:32 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3277556 00:05:14.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3277556) - No such process 00:05:14.155 12:44:32 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3277556 is not found' 00:05:14.155 Process with pid 3277556 is not found 00:05:14.155 12:44:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:14.155 00:05:14.155 real 0m16.127s 00:05:14.155 user 0m28.193s 00:05:14.155 sys 0m5.208s 00:05:14.155 12:44:32 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.155 12:44:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.155 ************************************ 00:05:14.155 END TEST cpu_locks 00:05:14.155 ************************************ 00:05:14.155 12:44:32 event -- common/autotest_common.sh@1142 -- # return 0 00:05:14.155 00:05:14.155 real 0m40.010s 00:05:14.155 user 1m15.974s 00:05:14.155 sys 0m9.148s 00:05:14.155 12:44:32 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.155 12:44:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.155 ************************************ 00:05:14.155 END TEST event 00:05:14.155 ************************************ 00:05:14.155 12:44:32 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.155 12:44:32 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.155 12:44:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.155 12:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.155 
12:44:32 -- common/autotest_common.sh@10 -- # set +x 00:05:14.155 ************************************ 00:05:14.155 START TEST thread 00:05:14.155 ************************************ 00:05:14.155 12:44:32 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:14.155 * Looking for test storage... 00:05:14.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:14.155 12:44:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.155 12:44:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:14.155 12:44:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.155 12:44:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.155 ************************************ 00:05:14.155 START TEST thread_poller_perf 00:05:14.155 ************************************ 00:05:14.155 12:44:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:14.155 [2024-07-15 12:44:32.277735] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:14.155 [2024-07-15 12:44:32.277832] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277928 ] 00:05:14.155 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.155 [2024-07-15 12:44:32.340884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.415 [2024-07-15 12:44:32.453379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.415 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:15.796 ====================================== 00:05:15.796 busy:2708753412 (cyc) 00:05:15.796 total_run_count: 366000 00:05:15.796 tsc_hz: 2700000000 (cyc) 00:05:15.796 ====================================== 00:05:15.796 poller_cost: 7400 (cyc), 2740 (nsec) 00:05:15.796 00:05:15.796 real 0m1.307s 00:05:15.796 user 0m1.219s 00:05:15.796 sys 0m0.080s 00:05:15.796 12:44:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.796 12:44:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:15.796 ************************************ 00:05:15.796 END TEST thread_poller_perf 00:05:15.796 ************************************ 00:05:15.797 12:44:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:15.797 12:44:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.797 12:44:33 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:15.797 12:44:33 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.797 12:44:33 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.797 ************************************ 00:05:15.797 START TEST thread_poller_perf 00:05:15.797 ************************************ 00:05:15.797 12:44:33 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:15.797 [2024-07-15 12:44:33.634332] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:15.797 [2024-07-15 12:44:33.634399] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278089 ] 00:05:15.797 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.797 [2024-07-15 12:44:33.694546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.797 [2024-07-15 12:44:33.797261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.797 Running 1000 pollers for 1 seconds with 0 microseconds period. 
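Editor's note (not part of the captured log): the poller_perf summary blocks reduce to simple arithmetic: poller_cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure follows from tsc_hz. A small awk sketch that reproduces the numbers reported above for the 1 microsecond-period run, with the constants copied from the log:

  busy=2708753412; runs=366000; tsc_hz=2700000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { c = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", c, c / (hz / 1e9) }'
  # prints 7400 (cyc) and about 2741 (nsec), matching the "poller_cost: 7400 (cyc), 2740 (nsec)" line above
  # (the same formula gives 555 (cyc) / 205 (nsec) for the 0 microsecond-period run that follows)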
00:05:16.733 ====================================== 00:05:16.733 busy:2702227107 (cyc) 00:05:16.733 total_run_count: 4862000 00:05:16.733 tsc_hz: 2700000000 (cyc) 00:05:16.733 ====================================== 00:05:16.733 poller_cost: 555 (cyc), 205 (nsec) 00:05:16.733 00:05:16.733 real 0m1.288s 00:05:16.733 user 0m1.203s 00:05:16.733 sys 0m0.079s 00:05:16.733 12:44:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.733 12:44:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.733 ************************************ 00:05:16.733 END TEST thread_poller_perf 00:05:16.733 ************************************ 00:05:16.733 12:44:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:16.733 12:44:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:16.733 00:05:16.733 real 0m2.744s 00:05:16.733 user 0m2.480s 00:05:16.733 sys 0m0.261s 00:05:16.733 12:44:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.733 12:44:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.733 ************************************ 00:05:16.733 END TEST thread 00:05:16.733 ************************************ 00:05:16.992 12:44:34 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.992 12:44:34 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.992 12:44:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.992 12:44:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.992 12:44:34 -- common/autotest_common.sh@10 -- # set +x 00:05:16.992 ************************************ 00:05:16.992 START TEST accel 00:05:16.992 ************************************ 00:05:16.992 12:44:34 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:16.992 * Looking for test storage... 00:05:16.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:16.992 12:44:35 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:16.992 12:44:35 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:16.992 12:44:35 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:16.992 12:44:35 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3278364 00:05:16.992 12:44:35 accel -- accel/accel.sh@63 -- # waitforlisten 3278364 00:05:16.992 12:44:35 accel -- common/autotest_common.sh@829 -- # '[' -z 3278364 ']' 00:05:16.992 12:44:35 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:16.992 12:44:35 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.992 12:44:35 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:16.992 12:44:35 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.992 12:44:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.992 12:44:35 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:16.992 12:44:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.992 12:44:35 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.992 12:44:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.992 12:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.992 12:44:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.992 12:44:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.992 12:44:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:16.992 12:44:35 accel -- accel/accel.sh@41 -- # jq -r . 00:05:16.992 [2024-07-15 12:44:35.091277] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:16.992 [2024-07-15 12:44:35.091360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278364 ] 00:05:16.992 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.992 [2024-07-15 12:44:35.148609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.250 [2024-07-15 12:44:35.254994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@862 -- # return 0 00:05:17.511 12:44:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:17.511 12:44:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:17.511 12:44:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:17.511 12:44:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:17.511 12:44:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:17.511 12:44:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.511 12:44:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@10 -- # set +x 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 
12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # IFS== 00:05:17.511 12:44:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:17.511 12:44:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:17.511 12:44:35 accel -- accel/accel.sh@75 -- # killprocess 3278364 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@948 -- # '[' -z 3278364 ']' 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@952 -- # kill -0 3278364 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@953 -- # uname 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3278364 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3278364' 00:05:17.511 killing process with pid 3278364 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@967 -- # kill 3278364 00:05:17.511 12:44:35 accel -- common/autotest_common.sh@972 -- # wait 3278364 00:05:18.082 12:44:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:18.082 12:44:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:18.082 12:44:35 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:18.082 12:44:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.082 12:44:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.082 12:44:36 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:18.082 12:44:36 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
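Editor's note (not part of the captured log): the long run of IFS== / read -r opc module lines above is accel.sh's get_expected_opcs walking the output of the accel_get_opc_assignments RPC; in this run every opcode is recorded as expected on the software module. A condensed sketch of that parsing, assuming rpc_cmd behaves like the test helper of the same name and with the jq filter copied verbatim from the trace:

  declare -A expected_opcs
  exp_opcs=($(rpc_cmd accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
      IFS== read -r opc module <<< "$opc_opt"   # split e.g. "copy=software" into opc / module
      expected_opcs["$opc"]=software            # each opcode is expected on the software module
  done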
00:05:18.082 12:44:36 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.082 12:44:36 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:18.082 12:44:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.082 12:44:36 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:18.082 12:44:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:18.082 12:44:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.082 12:44:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.082 ************************************ 00:05:18.082 START TEST accel_missing_filename 00:05:18.082 ************************************ 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.082 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:18.082 12:44:36 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:18.082 [2024-07-15 12:44:36.119029] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:18.082 [2024-07-15 12:44:36.119106] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278465 ] 00:05:18.082 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.082 [2024-07-15 12:44:36.179523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.082 [2024-07-15 12:44:36.284828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.341 [2024-07-15 12:44:36.342860] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.341 [2024-07-15 12:44:36.426780] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:18.341 A filename is required. 
00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.341 00:05:18.341 real 0m0.439s 00:05:18.341 user 0m0.335s 00:05:18.341 sys 0m0.139s 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.341 12:44:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:18.341 ************************************ 00:05:18.341 END TEST accel_missing_filename 00:05:18.341 ************************************ 00:05:18.599 12:44:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.599 12:44:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.599 12:44:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:18.599 12:44:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.599 12:44:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.599 ************************************ 00:05:18.599 START TEST accel_compress_verify 00:05:18.599 ************************************ 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.599 12:44:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.599 12:44:36 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:18.599 12:44:36 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:18.599 [2024-07-15 12:44:36.604094] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:18.599 [2024-07-15 12:44:36.604155] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278590 ] 00:05:18.599 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.599 [2024-07-15 12:44:36.661452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.599 [2024-07-15 12:44:36.767745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.857 [2024-07-15 12:44:36.821441] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.857 [2024-07-15 12:44:36.903980] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:18.857 00:05:18.857 Compression does not support the verify option, aborting. 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.857 00:05:18.857 real 0m0.431s 00:05:18.857 user 0m0.341s 00:05:18.857 sys 0m0.123s 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.857 12:44:37 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:18.857 ************************************ 00:05:18.857 END TEST accel_compress_verify 00:05:18.857 ************************************ 00:05:18.857 12:44:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:18.857 12:44:37 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:18.857 12:44:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:18.857 12:44:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.857 12:44:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.857 ************************************ 00:05:18.857 START TEST accel_wrong_workload 00:05:18.857 ************************************ 00:05:18.857 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:18.857 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:18.857 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:18.857 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:19.115 12:44:37 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:19.115 12:44:37 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:19.115 Unsupported workload type: foobar 00:05:19.115 [2024-07-15 12:44:37.081591] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:19.115 accel_perf options: 00:05:19.115 [-h help message] 00:05:19.115 [-q queue depth per core] 00:05:19.115 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:19.115 [-T number of threads per core 00:05:19.115 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:19.115 [-t time in seconds] 00:05:19.115 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:19.115 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:19.115 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:19.115 [-l for compress/decompress workloads, name of uncompressed input file 00:05:19.115 [-S for crc32c workload, use this seed value (default 0) 00:05:19.115 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:19.115 [-f for fill workload, use this BYTE value (default 255) 00:05:19.115 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:19.115 [-y verify result if this switch is on] 00:05:19.115 [-a tasks to allocate per core (default: same value as -q)] 00:05:19.115 Can be used to spread operations across a wider range of memory. 
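Editor's note (not part of the captured log): the "Unsupported workload type: foobar" parse error and the usage dump above are intentional; accel_wrong_workload (and accel_negative_buffers right after it) run accel_perf under the NOT helper from autotest_common.sh, which inverts the exit status so the test only passes when the bad invocation is rejected. The "Error: writing output failed: Broken pipe" lines around these negative cases presumably come from the usage text being written to an already-closed pipe rather than from a test failure. A minimal sketch of the pattern, using the accel_perf path from the log and a simplified stand-in for NOT:

  NOT() { ! "$@"; }   # simplified stand-in for the autotest_common.sh helper
  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  NOT "$accel_perf" -t 1 -w foobar         # unsupported workload type: must exit non-zero
  NOT "$accel_perf" -t 1 -w xor -y -x -1   # negative source-buffer count: must also be rejected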
00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.115 00:05:19.115 real 0m0.024s 00:05:19.115 user 0m0.014s 00:05:19.115 sys 0m0.010s 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.115 12:44:37 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:19.115 ************************************ 00:05:19.115 END TEST accel_wrong_workload 00:05:19.115 ************************************ 00:05:19.115 Error: writing output failed: Broken pipe 00:05:19.115 12:44:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.115 12:44:37 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:19.115 12:44:37 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:19.115 12:44:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.115 12:44:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.115 ************************************ 00:05:19.115 START TEST accel_negative_buffers 00:05:19.115 ************************************ 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.115 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:19.115 12:44:37 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:19.115 -x option must be non-negative. 
00:05:19.115 [2024-07-15 12:44:37.154528] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:19.115 accel_perf options: 00:05:19.115 [-h help message] 00:05:19.115 [-q queue depth per core] 00:05:19.115 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:19.115 [-T number of threads per core 00:05:19.116 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:19.116 [-t time in seconds] 00:05:19.116 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:19.116 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:19.116 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:19.116 [-l for compress/decompress workloads, name of uncompressed input file 00:05:19.116 [-S for crc32c workload, use this seed value (default 0) 00:05:19.116 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:19.116 [-f for fill workload, use this BYTE value (default 255) 00:05:19.116 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:19.116 [-y verify result if this switch is on] 00:05:19.116 [-a tasks to allocate per core (default: same value as -q)] 00:05:19.116 Can be used to spread operations across a wider range of memory. 00:05:19.116 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:19.116 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.116 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.116 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.116 00:05:19.116 real 0m0.024s 00:05:19.116 user 0m0.011s 00:05:19.116 sys 0m0.013s 00:05:19.116 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.116 12:44:37 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:19.116 ************************************ 00:05:19.116 END TEST accel_negative_buffers 00:05:19.116 ************************************ 00:05:19.116 Error: writing output failed: Broken pipe 00:05:19.116 12:44:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:19.116 12:44:37 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:19.116 12:44:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:19.116 12:44:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.116 12:44:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.116 ************************************ 00:05:19.116 START TEST accel_crc32c 00:05:19.116 ************************************ 00:05:19.116 12:44:37 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:19.116 12:44:37 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:19.116 [2024-07-15 12:44:37.221371] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:19.116 [2024-07-15 12:44:37.221432] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278668 ] 00:05:19.116 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.116 [2024-07-15 12:44:37.278173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.374 [2024-07-15 12:44:37.384180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:19.374 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:19.375 12:44:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:20.754 12:44:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.754 00:05:20.754 real 0m1.436s 00:05:20.754 user 0m1.304s 00:05:20.754 sys 0m0.135s 00:05:20.754 12:44:38 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.754 12:44:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 ************************************ 00:05:20.754 END TEST accel_crc32c 00:05:20.754 ************************************ 00:05:20.754 12:44:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:20.754 12:44:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:20.754 12:44:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:20.754 12:44:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.754 12:44:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.754 ************************************ 00:05:20.754 START TEST accel_crc32c_C2 00:05:20.754 ************************************ 00:05:20.754 12:44:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:20.754 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.754 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:20.754 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.754 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:20.754 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:20.755 [2024-07-15 12:44:38.703735] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:20.755 [2024-07-15 12:44:38.703821] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3278933 ] 00:05:20.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.755 [2024-07-15 12:44:38.762761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.755 [2024-07-15 12:44:38.867024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:20.755 12:44:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.132 00:05:22.132 real 0m1.441s 00:05:22.132 user 0m1.308s 00:05:22.132 sys 0m0.135s 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.132 12:44:40 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:22.132 ************************************ 00:05:22.132 END TEST accel_crc32c_C2 00:05:22.132 ************************************ 00:05:22.132 12:44:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:22.132 12:44:40 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:22.132 12:44:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:22.132 12:44:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.132 12:44:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.132 ************************************ 00:05:22.132 START TEST accel_copy 00:05:22.132 ************************************ 00:05:22.132 12:44:40 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
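Each case here is driven through run_test from common/autotest_common.sh, e.g. run_test accel_copy accel_test -t 1 -w copy -y above. A rough approximation of what the surrounding banners and the real/user/sys lines imply; this is an assumption, not the real helper, which also handles the xtrace toggling and the '[' 7 -le 1 ']' argument check visible in the trace:

    # hypothetical stand-in for run_test; only the observable output is modelled
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. accel_test -t 1 -w copy -y; bash's time keyword prints the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }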
00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:22.132 12:44:40 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:22.132 [2024-07-15 12:44:40.190238] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:22.133 [2024-07-15 12:44:40.190294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279092 ] 00:05:22.133 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.133 [2024-07-15 12:44:40.247817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.393 [2024-07-15 12:44:40.353463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:22.393 12:44:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 
12:44:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:23.773 12:44:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.773 00:05:23.773 real 0m1.436s 00:05:23.773 user 0m1.296s 00:05:23.773 sys 0m0.141s 00:05:23.773 12:44:41 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.774 12:44:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:23.774 ************************************ 00:05:23.774 END TEST accel_copy 00:05:23.774 ************************************ 00:05:23.774 12:44:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.774 12:44:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.774 12:44:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:23.774 12:44:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.774 12:44:41 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.774 ************************************ 00:05:23.774 START TEST accel_fill 00:05:23.774 ************************************ 00:05:23.774 12:44:41 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:23.774 [2024-07-15 12:44:41.676240] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:23.774 [2024-07-15 12:44:41.676302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279253 ] 00:05:23.774 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.774 [2024-07-15 12:44:41.734751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.774 [2024-07-15 12:44:41.851398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
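The accel_fill case above is the only one in this log that passes data-shaping flags to accel_perf (-f 128 -q 64 -a 64). For reference, the same invocation written out with comments; the flag meanings marked as assumptions are inferred, the rest are mirrored directly in the val=... configuration echo around this point in the trace:

    # same flags as the accel_fill invocation above, annotated
    args=(
        -c /dev/fd/62    # accel config supplied on file descriptor 62
        -t 1             # duration, echoed as val='1 seconds'
        -w fill          # workload type, echoed as val=fill / accel_opc=fill
        -f 128           # fill byte, echoed back in hex as val=0x80
        -q 64            # assumption: queue depth (one of the val=64 entries below)
        -a 64            # assumption: buffer alignment (the other val=64 entry)
        -y               # assumption: verify the result
    )
    ./build/examples/accel_perf "${args[@]}"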
00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:23.774 12:44:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.154 12:44:43 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:25.154 12:44:43 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.154 00:05:25.154 real 0m1.451s 00:05:25.154 user 0m1.316s 00:05:25.154 sys 0m0.136s 00:05:25.154 12:44:43 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.154 12:44:43 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:25.154 ************************************ 00:05:25.154 END TEST accel_fill 00:05:25.154 ************************************ 00:05:25.154 12:44:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.154 12:44:43 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:25.154 12:44:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:25.154 12:44:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.154 12:44:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.154 ************************************ 00:05:25.154 START TEST accel_copy_crc32c 00:05:25.154 ************************************ 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:25.154 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:25.154 [2024-07-15 12:44:43.174572] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:25.154 [2024-07-15 12:44:43.174635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279521 ] 00:05:25.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.154 [2024-07-15 12:44:43.232092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.154 [2024-07-15 12:44:43.335836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.413 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.414 
12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.414 12:44:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.796 00:05:26.796 real 0m1.438s 00:05:26.796 user 0m1.311s 00:05:26.796 sys 0m0.129s 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.796 12:44:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:26.796 ************************************ 00:05:26.796 END TEST accel_copy_crc32c 00:05:26.796 ************************************ 00:05:26.796 12:44:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:26.796 12:44:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:26.796 12:44:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:26.796 12:44:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.796 12:44:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:26.796 ************************************ 00:05:26.796 START TEST accel_copy_crc32c_C2 00:05:26.796 ************************************ 00:05:26.796 12:44:44 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:26.796 [2024-07-15 12:44:44.659016] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:26.796 [2024-07-15 12:44:44.659093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279680 ] 00:05:26.796 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.796 [2024-07-15 12:44:44.718406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.796 [2024-07-15 12:44:44.825231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
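The EAL: No free 2048 kB hugepages reported on node 1 message recurs at every accel_perf start-up in this log; the runs proceed anyway, as the spdk_app_start and reactor_run notices that follow it show. For anyone reproducing this on the test host, hugepage state can be inspected through the standard Linux interfaces (these commands are not part of the job itself):

    grep -i huge /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages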
00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.796 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.797 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:26.797 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:26.797 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:26.797 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:26.797 12:44:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
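One detail worth calling out: the plain copy_crc32c run above echoes two 4096-byte buffers, while this copy_crc32c_C2 run (started with -C 2) echoes '4096 bytes' and '8192 bytes'. That is consistent with the -C argument doubling one side of the operation (2 x 4096 = 8192), though this reading is inferred from the echoed sizes rather than taken from accel_perf documentation.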
00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.174 00:05:28.174 real 0m1.431s 00:05:28.174 user 0m1.302s 00:05:28.174 sys 0m0.131s 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.174 12:44:46 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:28.174 ************************************ 00:05:28.174 END TEST accel_copy_crc32c_C2 00:05:28.174 ************************************ 00:05:28.174 12:44:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.174 12:44:46 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:28.174 12:44:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:28.174 12:44:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.174 12:44:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.174 ************************************ 00:05:28.174 START TEST accel_dualcast 00:05:28.174 ************************************ 00:05:28.174 12:44:46 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:28.174 [2024-07-15 12:44:46.142905] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:28.174 [2024-07-15 12:44:46.142966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279841 ] 00:05:28.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.174 [2024-07-15 12:44:46.202398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.174 [2024-07-15 12:44:46.307695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.174 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:28.175 12:44:46 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.554 12:44:47 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:29.554 12:44:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.554 00:05:29.554 real 0m1.431s 00:05:29.554 user 0m1.301s 00:05:29.554 sys 0m0.132s 00:05:29.554 12:44:47 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.554 12:44:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:29.554 ************************************ 00:05:29.554 END TEST accel_dualcast 00:05:29.554 ************************************ 00:05:29.554 12:44:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.554 12:44:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:29.554 12:44:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:29.554 12:44:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.554 12:44:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.554 ************************************ 00:05:29.554 START TEST accel_compare 00:05:29.554 ************************************ 00:05:29.554 12:44:47 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:29.554 12:44:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:29.554 12:44:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:29.555 12:44:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:29.555 [2024-07-15 12:44:47.625238] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:29.555 [2024-07-15 12:44:47.625301] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280107 ] 00:05:29.555 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.555 [2024-07-15 12:44:47.684306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.815 [2024-07-15 12:44:47.790004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.815 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.815 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.815 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.815 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.815 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.815 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:29.816 12:44:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 
12:44:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:31.197 12:44:49 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.197 00:05:31.197 real 0m1.440s 00:05:31.197 user 0m1.300s 00:05:31.197 sys 0m0.142s 00:05:31.197 12:44:49 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.197 12:44:49 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:31.197 ************************************ 00:05:31.197 END TEST accel_compare 00:05:31.197 ************************************ 00:05:31.197 12:44:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.197 12:44:49 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:31.197 12:44:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.197 12:44:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.197 12:44:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.197 ************************************ 00:05:31.197 START TEST accel_xor 00:05:31.197 ************************************ 00:05:31.197 12:44:49 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:31.197 [2024-07-15 12:44:49.114029] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:31.197 [2024-07-15 12:44:49.114090] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280266 ] 00:05:31.197 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.197 [2024-07-15 12:44:49.173797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.197 [2024-07-15 12:44:49.284210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.197 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:31.198 12:44:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.580 00:05:32.580 real 0m1.434s 00:05:32.580 user 0m1.313s 00:05:32.580 sys 0m0.122s 00:05:32.580 12:44:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.580 12:44:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:32.580 ************************************ 00:05:32.580 END TEST accel_xor 00:05:32.580 ************************************ 00:05:32.580 12:44:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.580 12:44:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:32.580 12:44:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:32.580 12:44:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.580 12:44:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.580 ************************************ 00:05:32.580 START TEST accel_xor 00:05:32.580 ************************************ 00:05:32.580 12:44:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:32.580 12:44:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:32.580 [2024-07-15 12:44:50.597981] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:32.580 [2024-07-15 12:44:50.598043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280427 ] 00:05:32.580 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.580 [2024-07-15 12:44:50.654972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.580 [2024-07-15 12:44:50.763616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.840 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:32.841 12:44:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:34.221 12:44:52 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.221 00:05:34.221 real 0m1.440s 00:05:34.221 user 0m1.302s 00:05:34.221 sys 0m0.141s 00:05:34.221 12:44:52 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.221 12:44:52 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:34.221 ************************************ 00:05:34.221 END TEST accel_xor 00:05:34.221 ************************************ 00:05:34.221 12:44:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.221 12:44:52 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:34.221 12:44:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:34.221 12:44:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.221 12:44:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.221 ************************************ 00:05:34.221 START TEST accel_dif_verify 00:05:34.221 ************************************ 00:05:34.221 12:44:52 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:34.221 [2024-07-15 12:44:52.086338] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:34.221 [2024-07-15 12:44:52.086400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280701 ] 00:05:34.221 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.221 [2024-07-15 12:44:52.144006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.221 [2024-07-15 12:44:52.247922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:34.221 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:34.222 12:44:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:35.605 12:44:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.605 00:05:35.605 real 0m1.437s 00:05:35.605 user 0m1.306s 00:05:35.605 sys 0m0.135s 00:05:35.605 12:44:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.605 12:44:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:35.605 ************************************ 00:05:35.605 END TEST accel_dif_verify 00:05:35.605 ************************************ 00:05:35.605 12:44:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.605 12:44:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:35.605 12:44:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:35.605 12:44:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.605 12:44:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.605 ************************************ 00:05:35.605 START TEST accel_dif_generate 00:05:35.605 ************************************ 00:05:35.605 12:44:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 
12:44:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:35.605 [2024-07-15 12:44:53.575074] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:35.605 [2024-07-15 12:44:53.575135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280852 ] 00:05:35.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.605 [2024-07-15 12:44:53.632501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.605 [2024-07-15 12:44:53.735490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:35.605 12:44:53 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.605 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:35.606 12:44:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.026 12:44:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:37.026 12:44:54 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.026 00:05:37.026 real 0m1.434s 00:05:37.026 user 0m1.311s 00:05:37.026 sys 0m0.126s 00:05:37.026 12:44:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.026 12:44:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:37.026 ************************************ 00:05:37.026 END TEST accel_dif_generate 00:05:37.026 ************************************ 00:05:37.026 12:44:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.026 12:44:55 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:37.026 12:44:55 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:37.026 12:44:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.026 12:44:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.026 ************************************ 00:05:37.026 START TEST accel_dif_generate_copy 00:05:37.026 ************************************ 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:37.026 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:37.026 [2024-07-15 12:44:55.052128] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
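The trace block above is test/accel/accel.sh reading back the configuration that accel_perf prints (one `read -r var val` per field) and then, via the accel.sh@27 checks, confirming that the software module actually executed the dif_generate opcode during the 1-second run. The accel_dif_generate_copy case that starts here drives the same binary with a different -w workload. Below is a stand-alone approximation of the traced invocation, using only the path and flags visible in this log; the harness additionally passes a JSON config on -c /dev/fd/62, which is omitted here.

    #!/usr/bin/env bash
    # Re-run the software-module dif_generate_copy workload for 1 second,
    # mirroring what run_test accel_dif_generate_copy drives in this log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path printed above
    "$SPDK"/build/examples/accel_perf -t 1 -w dif_generate_copy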
00:05:37.026 [2024-07-15 12:44:55.052186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281021 ] 00:05:37.026 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.026 [2024-07-15 12:44:55.110479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.026 [2024-07-15 12:44:55.218945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.285 12:44:55 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:38.667 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.668 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:38.668 12:44:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.668 00:05:38.668 real 0m1.439s 00:05:38.668 user 0m1.301s 00:05:38.668 sys 0m0.140s 00:05:38.668 12:44:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.668 12:44:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:38.668 ************************************ 00:05:38.668 END TEST accel_dif_generate_copy 00:05:38.668 ************************************ 00:05:38.668 12:44:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.668 12:44:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:38.668 12:44:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.668 12:44:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:38.668 12:44:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.668 12:44:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.668 ************************************ 00:05:38.668 START TEST accel_comp 00:05:38.668 ************************************ 00:05:38.668 12:44:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.668 12:44:56 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:38.668 [2024-07-15 12:44:56.543503] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:38.668 [2024-07-15 12:44:56.543564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281198 ] 00:05:38.668 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.668 [2024-07-15 12:44:56.601822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.668 [2024-07-15 12:44:56.706288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:38.668 12:44:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:40.052 12:44:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.052 00:05:40.052 real 0m1.440s 00:05:40.052 user 0m1.307s 00:05:40.052 sys 0m0.136s 00:05:40.052 12:44:57 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.052 12:44:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:40.052 ************************************ 00:05:40.052 END TEST accel_comp 00:05:40.052 ************************************ 00:05:40.052 12:44:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.052 12:44:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.052 12:44:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:40.052 12:44:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.052 12:44:57 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:40.052 ************************************ 00:05:40.052 START TEST accel_decomp 00:05:40.052 ************************************ 00:05:40.052 12:44:58 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.052 12:44:58 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:40.052 12:44:58 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:40.052 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:40.053 [2024-07-15 12:44:58.030854] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
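accel_comp (ended above) and accel_decomp (starting here) exercise the compress and decompress opcodes against the same corpus: the traced command lines feed test/accel/bib to accel_perf via -l, and the decompress run adds -y for verification. A sketch of the pair exactly as traced, minus the -c /dev/fd/62 config fd the harness injects:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BIB="$SPDK"/test/accel/bib                                    # input corpus used by both runs
    "$SPDK"/build/examples/accel_perf -t 1 -w compress   -l "$BIB"      # accel_comp
    "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y   # accel_decomp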
00:05:40.053 [2024-07-15 12:44:58.030923] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281449 ] 00:05:40.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.053 [2024-07-15 12:44:58.087918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.053 [2024-07-15 12:44:58.192210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:40.053 12:44:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:41.427 12:44:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.427 00:05:41.427 real 0m1.438s 00:05:41.427 user 0m1.300s 00:05:41.427 sys 0m0.141s 00:05:41.427 12:44:59 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.427 12:44:59 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:41.427 ************************************ 00:05:41.427 END TEST accel_decomp 00:05:41.427 ************************************ 00:05:41.427 12:44:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.427 12:44:59 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.427 12:44:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:41.427 12:44:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.427 12:44:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.427 ************************************ 00:05:41.427 START TEST accel_decomp_full 00:05:41.427 ************************************ 00:05:41.427 12:44:59 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.427 12:44:59 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.427 12:44:59 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.428 12:44:59 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.428 12:44:59 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:41.428 12:44:59 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:41.428 [2024-07-15 12:44:59.520637] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:41.428 [2024-07-15 12:44:59.520707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281602 ] 00:05:41.428 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.428 [2024-07-15 12:44:59.580655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.685 [2024-07-15 12:44:59.684501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.685 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:41.686 12:44:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:43.107 12:45:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.107 00:05:43.107 real 0m1.441s 00:05:43.107 user 0m1.316s 00:05:43.107 sys 0m0.127s 00:05:43.107 12:45:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.107 12:45:00 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:43.107 ************************************ 00:05:43.108 END TEST accel_decomp_full 00:05:43.108 ************************************ 00:05:43.108 12:45:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.108 12:45:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:43.108 12:45:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:43.108 12:45:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.108 12:45:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.108 ************************************ 00:05:43.108 START TEST accel_decomp_mcore 00:05:43.108 ************************************ 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:43.108 12:45:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:43.108 [2024-07-15 12:45:01.012278] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
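accel_decomp_mcore adds -m 0xf to the same decompress workload; the EAL and reactor notices that follow report 4 available cores with reactors on cores 0 through 3, instead of the single core used by the earlier runs, which is also why its timing summary shows roughly 4.8 s of user time against 1.5 s real. Traced invocation, again without the harness's -c /dev/fd/62:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # same decompress-with-verify run, spread across cores 0-3 via the 0xf core mask
    "$SPDK"/build/examples/accel_perf -t 1 -w decompress -l "$SPDK"/test/accel/bib -y -m 0xf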
00:05:43.108 [2024-07-15 12:45:01.012345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281831 ] 00:05:43.108 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.108 [2024-07-15 12:45:01.074207] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:43.108 [2024-07-15 12:45:01.190747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.108 [2024-07-15 12:45:01.190816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:43.108 [2024-07-15 12:45:01.190877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.108 [2024-07-15 12:45:01.190881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:43.108 12:45:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.484 00:05:44.484 real 0m1.470s 00:05:44.484 user 0m4.766s 00:05:44.484 sys 0m0.150s 00:05:44.484 12:45:02 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.484 12:45:02 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:44.484 ************************************ 00:05:44.484 END TEST accel_decomp_mcore 00:05:44.484 ************************************ 00:05:44.484 12:45:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.484 12:45:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.484 12:45:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:44.484 12:45:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.484 12:45:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.484 ************************************ 00:05:44.484 START TEST accel_decomp_full_mcore 00:05:44.484 ************************************ 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:44.484 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:44.484 [2024-07-15 12:45:02.534461] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:44.484 [2024-07-15 12:45:02.534524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282151 ] 00:05:44.484 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.484 [2024-07-15 12:45:02.593876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.744 [2024-07-15 12:45:02.707122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.744 [2024-07-15 12:45:02.707186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.744 [2024-07-15 12:45:02.707296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.744 [2024-07-15 12:45:02.707299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.744 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:44.745 12:45:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.125 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.126 00:05:46.126 real 0m1.480s 00:05:46.126 user 0m4.811s 00:05:46.126 sys 0m0.163s 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.126 12:45:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:46.126 ************************************ 00:05:46.126 END TEST accel_decomp_full_mcore 00:05:46.126 ************************************ 00:05:46.126 12:45:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.126 12:45:04 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:46.126 12:45:04 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:46.126 12:45:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.126 12:45:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.126 ************************************ 00:05:46.126 START TEST accel_decomp_mthread 00:05:46.126 ************************************ 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:46.126 [2024-07-15 12:45:04.066416] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:46.126 [2024-07-15 12:45:04.066480] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282313 ] 00:05:46.126 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.126 [2024-07-15 12:45:04.125643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.126 [2024-07-15 12:45:04.241505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:46.126 12:45:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.507 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.508 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.508 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.508 12:45:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.508 00:05:47.508 real 0m1.450s 00:05:47.508 user 0m1.315s 00:05:47.508 sys 0m0.138s 00:05:47.508 12:45:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.508 12:45:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:47.508 ************************************ 00:05:47.508 END TEST accel_decomp_mthread 00:05:47.508 ************************************ 00:05:47.508 12:45:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.508 12:45:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.508 12:45:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:47.508 12:45:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.508 12:45:05 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.508 ************************************ 00:05:47.508 START TEST accel_decomp_full_mthread 00:05:47.508 ************************************ 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:47.508 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:47.508 [2024-07-15 12:45:05.562526] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:47.508 [2024-07-15 12:45:05.562589] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282473 ] 00:05:47.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.508 [2024-07-15 12:45:05.622234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.767 [2024-07-15 12:45:05.730695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:47.767 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:47.768 12:45:05 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:47.768 12:45:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.149 00:05:49.149 real 0m1.481s 00:05:49.149 user 0m1.341s 00:05:49.149 sys 0m0.142s 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.149 12:45:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:49.149 ************************************ 00:05:49.149 END 
TEST accel_decomp_full_mthread 00:05:49.149 ************************************ 00:05:49.149 12:45:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.149 12:45:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:49.149 12:45:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:49.149 12:45:07 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:49.149 12:45:07 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:49.149 12:45:07 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.149 12:45:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.149 12:45:07 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.149 12:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.149 12:45:07 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.149 12:45:07 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.149 12:45:07 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.149 12:45:07 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:49.149 12:45:07 accel -- accel/accel.sh@41 -- # jq -r . 00:05:49.149 ************************************ 00:05:49.149 START TEST accel_dif_functional_tests 00:05:49.149 ************************************ 00:05:49.149 12:45:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:49.149 [2024-07-15 12:45:07.111050] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:49.149 [2024-07-15 12:45:07.111129] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3282750 ] 00:05:49.149 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.149 [2024-07-15 12:45:07.168704] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.149 [2024-07-15 12:45:07.279592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.149 [2024-07-15 12:45:07.279664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.149 [2024-07-15 12:45:07.279667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.407 00:05:49.407 00:05:49.407 CUnit - A unit testing framework for C - Version 2.1-3 00:05:49.407 http://cunit.sourceforge.net/ 00:05:49.407 00:05:49.407 00:05:49.407 Suite: accel_dif 00:05:49.407 Test: verify: DIF generated, GUARD check ...passed 00:05:49.407 Test: verify: DIF generated, APPTAG check ...passed 00:05:49.407 Test: verify: DIF generated, REFTAG check ...passed 00:05:49.407 Test: verify: DIF not generated, GUARD check ...[2024-07-15 12:45:07.369711] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:49.407 passed 00:05:49.407 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 12:45:07.369801] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:49.407 passed 00:05:49.407 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 12:45:07.369852] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:49.407 passed 00:05:49.407 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:49.407 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
12:45:07.369931] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:49.407 passed 00:05:49.407 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:49.407 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:49.407 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:49.408 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 12:45:07.370095] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:49.408 passed 00:05:49.408 Test: verify copy: DIF generated, GUARD check ...passed 00:05:49.408 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:49.408 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:49.408 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 12:45:07.370253] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:49.408 passed 00:05:49.408 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 12:45:07.370290] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:49.408 passed 00:05:49.408 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 12:45:07.370323] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:49.408 passed 00:05:49.408 Test: generate copy: DIF generated, GUARD check ...passed 00:05:49.408 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:49.408 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:49.408 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:49.408 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:49.408 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:49.408 Test: generate copy: iovecs-len validate ...[2024-07-15 12:45:07.370551] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:49.408 passed 00:05:49.408 Test: generate copy: buffer alignment validate ...passed 00:05:49.408 00:05:49.408 Run Summary: Type Total Ran Passed Failed Inactive 00:05:49.408 suites 1 1 n/a 0 0 00:05:49.408 tests 26 26 26 0 0 00:05:49.408 asserts 115 115 115 0 n/a 00:05:49.408 00:05:49.408 Elapsed time = 0.003 seconds 00:05:49.408 00:05:49.408 real 0m0.537s 00:05:49.408 user 0m0.793s 00:05:49.408 sys 0m0.177s 00:05:49.408 12:45:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.408 12:45:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:49.408 ************************************ 00:05:49.408 END TEST accel_dif_functional_tests 00:05:49.408 ************************************ 00:05:49.666 12:45:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.666 00:05:49.666 real 0m32.646s 00:05:49.666 user 0m36.176s 00:05:49.666 sys 0m4.429s 00:05:49.666 12:45:07 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.666 12:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.666 ************************************ 00:05:49.666 END TEST accel 00:05:49.666 ************************************ 00:05:49.666 12:45:07 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.666 12:45:07 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:49.666 12:45:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.666 12:45:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.666 12:45:07 -- common/autotest_common.sh@10 -- # set +x 00:05:49.666 ************************************ 00:05:49.666 START TEST accel_rpc 00:05:49.666 ************************************ 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:49.666 * Looking for test storage... 00:05:49.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:49.666 12:45:07 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:49.666 12:45:07 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3283049 00:05:49.666 12:45:07 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3283049 00:05:49.666 12:45:07 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3283049 ']' 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.666 12:45:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.666 [2024-07-15 12:45:07.768142] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:05:49.666 [2024-07-15 12:45:07.768239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283049 ] 00:05:49.666 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.666 [2024-07-15 12:45:07.828902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.926 [2024-07-15 12:45:07.948374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.926 12:45:07 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.926 12:45:07 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:49.926 12:45:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:49.926 12:45:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:49.926 12:45:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:49.926 12:45:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:49.926 12:45:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:49.926 12:45:07 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.926 12:45:07 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.926 12:45:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.926 ************************************ 00:05:49.926 START TEST accel_assign_opcode 00:05:49.926 ************************************ 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.926 [2024-07-15 12:45:08.008940] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:49.926 [2024-07-15 12:45:08.016949] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:49.926 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:49.927 12:45:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:49.927 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:49.927 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.187 software 00:05:50.187 00:05:50.187 real 0m0.300s 00:05:50.187 user 0m0.042s 00:05:50.187 sys 0m0.004s 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.187 12:45:08 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:50.187 ************************************ 00:05:50.187 END TEST accel_assign_opcode 00:05:50.187 ************************************ 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.187 12:45:08 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3283049 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3283049 ']' 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3283049 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283049 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283049' 00:05:50.187 killing process with pid 3283049 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@967 -- # kill 3283049 00:05:50.187 12:45:08 accel_rpc -- common/autotest_common.sh@972 -- # wait 3283049 00:05:50.755 00:05:50.755 real 0m1.107s 00:05:50.755 user 0m1.047s 00:05:50.755 sys 0m0.423s 00:05:50.755 12:45:08 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.755 12:45:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.755 ************************************ 00:05:50.755 END TEST accel_rpc 00:05:50.755 ************************************ 00:05:50.755 12:45:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.755 12:45:08 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:50.755 12:45:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.755 12:45:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.755 12:45:08 -- common/autotest_common.sh@10 -- # set +x 00:05:50.755 ************************************ 00:05:50.755 START TEST app_cmdline 00:05:50.755 ************************************ 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:50.755 * Looking for test storage... 
00:05:50.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:50.755 12:45:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:50.755 12:45:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3283487 00:05:50.755 12:45:08 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:50.755 12:45:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3283487 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3283487 ']' 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.755 12:45:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:50.755 [2024-07-15 12:45:08.927762] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:05:50.755 [2024-07-15 12:45:08.927863] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283487 ] 00:05:50.755 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.015 [2024-07-15 12:45:08.993761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.015 [2024-07-15 12:45:09.114625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.273 12:45:09 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.273 12:45:09 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:51.273 12:45:09 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:51.531 { 00:05:51.531 "version": "SPDK v24.09-pre git sha1 6151edad3", 00:05:51.531 "fields": { 00:05:51.531 "major": 24, 00:05:51.531 "minor": 9, 00:05:51.531 "patch": 0, 00:05:51.531 "suffix": "-pre", 00:05:51.531 "commit": "6151edad3" 00:05:51.531 } 00:05:51.531 } 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:51.531 12:45:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:51.531 12:45:09 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:51.788 request: 00:05:51.788 { 00:05:51.788 "method": "env_dpdk_get_mem_stats", 00:05:51.788 "req_id": 1 00:05:51.788 } 00:05:51.788 Got JSON-RPC error response 00:05:51.788 response: 00:05:51.788 { 00:05:51.788 "code": -32601, 00:05:51.788 "message": "Method not found" 00:05:51.788 } 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:51.788 12:45:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3283487 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3283487 ']' 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3283487 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283487 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283487' 00:05:51.788 killing process with pid 3283487 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@967 -- # kill 3283487 00:05:51.788 12:45:09 app_cmdline -- common/autotest_common.sh@972 -- # wait 3283487 00:05:52.355 00:05:52.355 real 0m1.507s 00:05:52.355 user 0m1.805s 00:05:52.355 sys 0m0.480s 00:05:52.355 12:45:10 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
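Editor's note: the app_cmdline test above exercises the --rpcs-allowed whitelist: spdk_get_version and rpc_get_methods are answered, while env_dpdk_get_mem_stats is rejected with JSON-RPC error -32601 "Method not found". A condensed sketch of that flow, using the binary and script paths shown in the log; the sleep stands in for the harness's waitforlisten helper.

# Sketch only; PID handling and readiness wait are simplified assumptions.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK_DIR/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
sleep 1   # the real test uses waitforlisten on /var/tmp/spdk.sock
$SPDK_DIR/scripts/rpc.py spdk_get_version                      # allowed: prints the version JSON
$SPDK_DIR/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly the two whitelisted methods
$SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats \
  || echo "rejected as expected (-32601 Method not found)"     # not on the whitelist
kill $tgt_pid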
00:05:52.355 12:45:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:52.355 ************************************ 00:05:52.355 END TEST app_cmdline 00:05:52.355 ************************************ 00:05:52.355 12:45:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.355 12:45:10 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:52.355 12:45:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.355 12:45:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.355 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.355 ************************************ 00:05:52.355 START TEST version 00:05:52.355 ************************************ 00:05:52.355 12:45:10 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:52.355 * Looking for test storage... 00:05:52.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:52.355 12:45:10 version -- app/version.sh@17 -- # get_header_version major 00:05:52.355 12:45:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # cut -f2 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.355 12:45:10 version -- app/version.sh@17 -- # major=24 00:05:52.355 12:45:10 version -- app/version.sh@18 -- # get_header_version minor 00:05:52.355 12:45:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # cut -f2 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.355 12:45:10 version -- app/version.sh@18 -- # minor=9 00:05:52.355 12:45:10 version -- app/version.sh@19 -- # get_header_version patch 00:05:52.355 12:45:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # cut -f2 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.355 12:45:10 version -- app/version.sh@19 -- # patch=0 00:05:52.355 12:45:10 version -- app/version.sh@20 -- # get_header_version suffix 00:05:52.355 12:45:10 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # tr -d '"' 00:05:52.355 12:45:10 version -- app/version.sh@14 -- # cut -f2 00:05:52.355 12:45:10 version -- app/version.sh@20 -- # suffix=-pre 00:05:52.355 12:45:10 version -- app/version.sh@22 -- # version=24.9 00:05:52.355 12:45:10 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:52.355 12:45:10 version -- app/version.sh@28 -- # version=24.9rc0 00:05:52.355 12:45:10 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:52.355 12:45:10 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:05:52.355 12:45:10 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:52.355 12:45:10 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:52.355 00:05:52.355 real 0m0.112s 00:05:52.355 user 0m0.055s 00:05:52.355 sys 0m0.079s 00:05:52.355 12:45:10 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.355 12:45:10 version -- common/autotest_common.sh@10 -- # set +x 00:05:52.355 ************************************ 00:05:52.355 END TEST version 00:05:52.355 ************************************ 00:05:52.355 12:45:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.355 12:45:10 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@198 -- # uname -s 00:05:52.355 12:45:10 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:52.355 12:45:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:52.355 12:45:10 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:52.355 12:45:10 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:52.355 12:45:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.355 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.355 12:45:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:52.355 12:45:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:52.355 12:45:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:52.355 12:45:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:52.355 12:45:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.355 12:45:10 -- common/autotest_common.sh@10 -- # set +x 00:05:52.613 ************************************ 00:05:52.613 START TEST nvmf_tcp 00:05:52.613 ************************************ 00:05:52.613 12:45:10 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:52.613 * Looking for test storage... 00:05:52.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.613 12:45:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.613 12:45:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.613 12:45:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.613 12:45:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.613 12:45:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.613 12:45:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:52.614 12:45:10 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:52.614 12:45:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.614 12:45:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:52.614 12:45:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:52.614 12:45:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:52.614 12:45:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.614 12:45:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.614 ************************************ 00:05:52.614 START TEST nvmf_example 00:05:52.614 ************************************ 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:52.614 * Looking for test storage... 
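Editor's note: the version test above derives 24.9 by scraping include/spdk/version.h with grep/cut/tr and cross-checks it against the Python package. The helper below is a simplified re-creation of that pipeline for illustration, not the exact body of app/version.sh; only SPDK_DIR and the grep/cut/tr steps are taken from the log.

# Illustrative re-creation of the header scrape traced in version.sh above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_header_version() {            # e.g. get_header_version major
  local field=${1^^}              # "major" -> "MAJOR"
  grep -E "^#define SPDK_VERSION_${field}[[:space:]]+" \
    "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
}
major=$(get_header_version major)   # 24
minor=$(get_header_version minor)   # 9
patch=$(get_header_version patch)   # 0
version="$major.$minor"; (( patch != 0 )) && version="$version.$patch"
echo "$version"   # 24.9; the harness appends the -pre suffix as "rc0" before comparing
# Cross-check against the Python binding, as the test does:
PYTHONPATH=$SPDK_DIR/python python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0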
00:05:52.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:52.614 12:45:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:05:55.149 Found 0000:84:00.0 (0x8086 - 0x159b) 00:05:55.149 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:05:55.150 Found 0000:84:00.1 (0x8086 - 0x159b) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:05:55.150 Found net devices under 
0000:84:00.0: cvl_0_0 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:05:55.150 Found net devices under 0000:84:00.1: cvl_0_1 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:55.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:55.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:05:55.150 00:05:55.150 --- 10.0.0.2 ping statistics --- 00:05:55.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.150 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:55.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:55.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:05:55.150 00:05:55.150 --- 10.0.0.1 ping statistics --- 00:05:55.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.150 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3285567 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3285567 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3285567 ']' 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
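Editor's note: the nvmf_tcp_init sequence above splits the two e810 ports so one machine can act as both initiator and NVMe/TCP target: one port moves into a network namespace with the target address, the other stays on the host with the initiator address. The same steps, condensed from the trace (device names and addresses are the ones the log reports; requires root):

# Condensed replay of nvmf_tcp_init as traced above.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                  # host -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host reachability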
00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.150 12:45:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:55.150 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.084 12:45:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:56.084 12:45:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:56.084 EAL: No free 2048 kB hugepages reported on node 1 
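Editor's note: with networking up, the example target at 10.0.0.2 is provisioned over JSON-RPC (TCP transport, a 64 MiB malloc bdev, one subsystem with a namespace and listener) and then driven with spdk_nvme_perf. A condensed replay of those calls, assuming the rpc.py and perf binaries from the paths in the log and a target already listening on /var/tmp/spdk.sock:

# Condensed from the nvmf_example trace above; SPDK_DIR is shorthand for the workspace path.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$SPDK_DIR/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192          # transport flags exactly as the harness passed them
$rpc bdev_malloc_create 64 512                        # 64 MiB ramdisk, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Queue depth 64, 4 KiB random read/write mix for 10 s, as run in the log:
$SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'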
00:06:08.304 Initializing NVMe Controllers 00:06:08.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:08.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:08.304 Initialization complete. Launching workers. 00:06:08.304 ======================================================== 00:06:08.304 Latency(us) 00:06:08.304 Device Information : IOPS MiB/s Average min max 00:06:08.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15125.62 59.08 4230.85 860.20 15332.34 00:06:08.304 ======================================================== 00:06:08.304 Total : 15125.62 59.08 4230.85 860.20 15332.34 00:06:08.304 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:08.304 rmmod nvme_tcp 00:06:08.304 rmmod nvme_fabrics 00:06:08.304 rmmod nvme_keyring 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3285567 ']' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3285567 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3285567 ']' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3285567 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3285567 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3285567' 00:06:08.304 killing process with pid 3285567 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 3285567 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 3285567 00:06:08.304 nvmf threads initialize successfully 00:06:08.304 bdev subsystem init successfully 00:06:08.304 created a nvmf target service 00:06:08.304 create targets's poll groups done 00:06:08.304 all subsystems of target started 00:06:08.304 nvmf target is running 00:06:08.304 all subsystems of target stopped 00:06:08.304 destroy targets's poll groups done 00:06:08.304 destroyed the nvmf target service 00:06:08.304 bdev subsystem finish successfully 00:06:08.304 nvmf threads destroy successfully 00:06:08.304 12:45:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:08.304 12:45:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.903 12:45:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:08.903 12:45:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:08.903 12:45:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.903 12:45:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.903 00:06:08.903 real 0m16.159s 00:06:08.903 user 0m45.530s 00:06:08.903 sys 0m3.629s 00:06:08.903 12:45:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.903 12:45:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.903 ************************************ 00:06:08.903 END TEST nvmf_example 00:06:08.903 ************************************ 00:06:08.903 12:45:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:08.903 12:45:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:08.903 12:45:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:08.903 12:45:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.903 12:45:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:08.903 ************************************ 00:06:08.903 START TEST nvmf_filesystem 00:06:08.903 ************************************ 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:08.903 * Looking for test storage... 
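Editor's note: the nvmftestfini teardown traced just above unwinds the setup in reverse: unload the kernel NVMe/TCP modules, kill the example target, and flush the addresses and namespace. A matching cleanup sketch; the namespace name, device and PID are the ones used above, and the explicit "ip netns delete" is an assumption standing in for the harness's remove_spdk_ns helper.

# Cleanup mirroring nvmftestfini; run as root.
modprobe -v -r nvme-tcp        # the log shows this also dropping nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
kill -9 "$nvmfpid" 2>/dev/null || true        # $nvmfpid was 3285567 in this run
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # remove_spdk_ns equivalent (assumption)
ip -4 addr flush cvl_0_1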
00:06:08.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:08.903 12:45:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:08.903 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:08.903 #define SPDK_CONFIG_H 00:06:08.903 #define SPDK_CONFIG_APPS 1 00:06:08.903 #define SPDK_CONFIG_ARCH native 00:06:08.903 #undef SPDK_CONFIG_ASAN 00:06:08.903 #undef SPDK_CONFIG_AVAHI 00:06:08.903 #undef SPDK_CONFIG_CET 00:06:08.903 #define SPDK_CONFIG_COVERAGE 1 00:06:08.903 #define SPDK_CONFIG_CROSS_PREFIX 00:06:08.903 #undef SPDK_CONFIG_CRYPTO 00:06:08.903 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:08.903 #undef SPDK_CONFIG_CUSTOMOCF 00:06:08.903 #undef SPDK_CONFIG_DAOS 00:06:08.903 #define SPDK_CONFIG_DAOS_DIR 00:06:08.903 #define SPDK_CONFIG_DEBUG 1 00:06:08.903 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:08.903 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:08.903 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:08.903 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:08.903 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:08.903 #undef SPDK_CONFIG_DPDK_UADK 00:06:08.903 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:08.903 #define SPDK_CONFIG_EXAMPLES 1 00:06:08.903 #undef SPDK_CONFIG_FC 00:06:08.903 #define SPDK_CONFIG_FC_PATH 00:06:08.903 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:08.903 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:08.903 #undef SPDK_CONFIG_FUSE 00:06:08.903 #undef SPDK_CONFIG_FUZZER 00:06:08.903 #define SPDK_CONFIG_FUZZER_LIB 00:06:08.903 #undef SPDK_CONFIG_GOLANG 00:06:08.903 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:08.903 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:08.903 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:08.903 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:08.903 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:08.903 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:08.903 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:08.903 #define SPDK_CONFIG_IDXD 1 00:06:08.903 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:08.903 #undef SPDK_CONFIG_IPSEC_MB 00:06:08.903 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:08.903 #define SPDK_CONFIG_ISAL 1 00:06:08.903 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:08.903 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:08.903 #define SPDK_CONFIG_LIBDIR 00:06:08.903 #undef SPDK_CONFIG_LTO 00:06:08.903 #define SPDK_CONFIG_MAX_LCORES 128 00:06:08.903 #define SPDK_CONFIG_NVME_CUSE 1 00:06:08.903 #undef SPDK_CONFIG_OCF 00:06:08.903 #define SPDK_CONFIG_OCF_PATH 00:06:08.903 #define 
SPDK_CONFIG_OPENSSL_PATH 00:06:08.903 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:08.903 #define SPDK_CONFIG_PGO_DIR 00:06:08.903 #undef SPDK_CONFIG_PGO_USE 00:06:08.903 #define SPDK_CONFIG_PREFIX /usr/local 00:06:08.903 #undef SPDK_CONFIG_RAID5F 00:06:08.903 #undef SPDK_CONFIG_RBD 00:06:08.903 #define SPDK_CONFIG_RDMA 1 00:06:08.903 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:08.903 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:08.903 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:08.903 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:08.903 #define SPDK_CONFIG_SHARED 1 00:06:08.903 #undef SPDK_CONFIG_SMA 00:06:08.903 #define SPDK_CONFIG_TESTS 1 00:06:08.903 #undef SPDK_CONFIG_TSAN 00:06:08.903 #define SPDK_CONFIG_UBLK 1 00:06:08.903 #define SPDK_CONFIG_UBSAN 1 00:06:08.903 #undef SPDK_CONFIG_UNIT_TESTS 00:06:08.903 #undef SPDK_CONFIG_URING 00:06:08.904 #define SPDK_CONFIG_URING_PATH 00:06:08.904 #undef SPDK_CONFIG_URING_ZNS 00:06:08.904 #undef SPDK_CONFIG_USDT 00:06:08.904 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:08.904 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:08.904 #define SPDK_CONFIG_VFIO_USER 1 00:06:08.904 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:08.904 #define SPDK_CONFIG_VHOST 1 00:06:08.904 #define SPDK_CONFIG_VIRTIO 1 00:06:08.904 #undef SPDK_CONFIG_VTUNE 00:06:08.904 #define SPDK_CONFIG_VTUNE_DIR 00:06:08.904 #define SPDK_CONFIG_WERROR 1 00:06:08.904 #define SPDK_CONFIG_WPDK_DIR 00:06:08.904 #undef SPDK_CONFIG_XNVME 00:06:08.904 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:08.904 12:45:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:08.904 12:45:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:08.904 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
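Editor's note: the sanitizer setup traced in the preceding entries (autotest_common.sh@193-238) follows the stock LeakSanitizer suppression-file pattern. A minimal standalone sketch of the same idea, with a placeholder binary name, is:

    # minimal sketch of the LSAN/UBSAN environment built above; ./my_app is a hypothetical binary
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' >> "$supp"                 # ignore leaks attributed to libfuse3
    export LSAN_OPTIONS=suppressions=$supp
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    ./my_app                                           # any ASan/UBSan-instrumented binary picks these up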
00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3287275 ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3287275 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.UbclDz 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.UbclDz/tests/target /tmp/spdk.UbclDz 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=38911430656 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083312128 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6171881472 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22538280960 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9007878144 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016664064 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8785920 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541025280 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:06:08.905 12:45:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=630784 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:08.905 * Looking for test storage... 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=38911430656 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8386473984 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:08.905 12:45:27 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
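Editor's note: the set_test_storage scan traced a few entries above (autotest_common.sh@327-389) reduces to "find the filesystem backing the test directory and confirm roughly 2 GiB is free before pointing SPDK_TEST_STORAGE at it". A simplified, non-authoritative sketch of that check (the real helper also handles tmpfs/ramfs fallbacks):

    # simplified sketch of the storage check above; paths are illustrative
    requested_size=$((2 * 1024 * 1024 * 1024))          # ~2 GiB of scratch space
    target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
    mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
    avail=$(df --output=avail -B1 "$target_dir" | tail -1)
    if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE="$target_dir"          # enough room: run the test in place
    else
        echo "insufficient space on $mount_point for test storage" >&2
    fi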
00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:08.905 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.906 12:45:27 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:08.906 12:45:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:11.436 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:11.436 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.436 12:45:29 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:11.436 Found net devices under 0000:84:00.0: cvl_0_0 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:11.436 Found net devices under 0000:84:00.1: cvl_0_1 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:11.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:11.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:06:11.436 00:06:11.436 --- 10.0.0.2 ping statistics --- 00:06:11.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.436 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:11.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:11.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:06:11.436 00:06:11.436 --- 10.0.0.1 ping statistics --- 00:06:11.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:11.436 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:11.436 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:11.437 ************************************ 00:06:11.437 START TEST nvmf_filesystem_no_in_capsule 00:06:11.437 ************************************ 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3288922 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3288922 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3288922 ']' 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.437 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.437 [2024-07-15 12:45:29.386557] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:06:11.437 [2024-07-15 12:45:29.386654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:11.437 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.437 [2024-07-15 12:45:29.451585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.437 [2024-07-15 12:45:29.562842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:11.437 [2024-07-15 12:45:29.562904] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:11.437 [2024-07-15 12:45:29.562933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:11.437 [2024-07-15 12:45:29.562945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:11.437 [2024-07-15 12:45:29.562955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
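The trace above is nvmf_tcp_init: the two net devices found under 0000:84:00.0 and 0000:84:00.1 are split so that cvl_0_0 (target, 10.0.0.2) lives in a private network namespace while cvl_0_1 (initiator, 10.0.0.1) stays in the default one, and nvmf_tgt is then started inside that namespace. A condensed sketch of those steps, reconstructed from the commands traced above (interface names, IPs and the nvmf_tgt invocation are the ones this run used):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # drop any stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                       # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
    modprobe nvme-tcp                                        # kernel initiator driver
    # nvmfappstart then launches the target in the namespace and waits for its RPC socket:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &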
00:06:11.437 [2024-07-15 12:45:29.563007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.437 [2024-07-15 12:45:29.563069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.437 [2024-07-15 12:45:29.563123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.437 [2024-07-15 12:45:29.563126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.696 [2024-07-15 12:45:29.716411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.696 Malloc1 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.696 [2024-07-15 12:45:29.898360] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:11.696 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:11.956 { 00:06:11.956 "name": "Malloc1", 00:06:11.956 "aliases": [ 00:06:11.956 "75c87d93-dca1-4695-a27a-ec49a978fd0b" 00:06:11.956 ], 00:06:11.956 "product_name": "Malloc disk", 00:06:11.956 "block_size": 512, 00:06:11.956 "num_blocks": 1048576, 00:06:11.956 "uuid": "75c87d93-dca1-4695-a27a-ec49a978fd0b", 00:06:11.956 "assigned_rate_limits": { 00:06:11.956 "rw_ios_per_sec": 0, 00:06:11.956 "rw_mbytes_per_sec": 0, 00:06:11.956 "r_mbytes_per_sec": 0, 00:06:11.956 "w_mbytes_per_sec": 0 00:06:11.956 }, 00:06:11.956 "claimed": true, 00:06:11.956 "claim_type": "exclusive_write", 00:06:11.956 "zoned": false, 00:06:11.956 "supported_io_types": { 00:06:11.956 "read": true, 00:06:11.956 "write": true, 00:06:11.956 "unmap": true, 00:06:11.956 "flush": true, 00:06:11.956 "reset": true, 00:06:11.956 "nvme_admin": false, 00:06:11.956 "nvme_io": false, 00:06:11.956 "nvme_io_md": false, 00:06:11.956 "write_zeroes": true, 00:06:11.956 "zcopy": true, 00:06:11.956 "get_zone_info": false, 00:06:11.956 "zone_management": false, 00:06:11.956 "zone_append": false, 00:06:11.956 "compare": false, 00:06:11.956 "compare_and_write": false, 00:06:11.956 "abort": true, 00:06:11.956 "seek_hole": false, 00:06:11.956 "seek_data": false, 00:06:11.956 "copy": true, 00:06:11.956 "nvme_iov_md": false 00:06:11.956 }, 00:06:11.956 "memory_domains": [ 00:06:11.956 { 
00:06:11.956 "dma_device_id": "system", 00:06:11.956 "dma_device_type": 1 00:06:11.956 }, 00:06:11.956 { 00:06:11.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.956 "dma_device_type": 2 00:06:11.956 } 00:06:11.956 ], 00:06:11.956 "driver_specific": {} 00:06:11.956 } 00:06:11.956 ]' 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:11.956 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:11.957 12:45:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:11.957 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:11.957 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:11.957 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:11.957 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:11.957 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:12.527 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:12.527 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:12.527 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:12.527 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:12.527 12:45:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:15.067 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:15.068 12:45:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:15.068 12:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:15.632 12:45:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:16.564 ************************************ 00:06:16.564 START TEST filesystem_ext4 00:06:16.564 ************************************ 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:16.564 12:45:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:16.564 12:45:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:16.564 mke2fs 1.46.5 (30-Dec-2021) 00:06:16.564 Discarding device blocks: 0/522240 done 00:06:16.564 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:16.564 Filesystem UUID: ba11f4b9-28e2-4691-a396-09cc06d06010 00:06:16.564 Superblock backups stored on blocks: 00:06:16.564 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:16.564 00:06:16.564 Allocating group tables: 0/64 done 00:06:16.564 Writing inode tables: 0/64 done 00:06:16.829 Creating journal (8192 blocks): done 00:06:17.088 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:06:17.088 00:06:17.088 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:17.088 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:17.346 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:17.346 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:17.346 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:17.346 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:17.346 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:17.346 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3288922 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:17.606 00:06:17.606 real 0m0.981s 00:06:17.606 user 0m0.021s 00:06:17.606 sys 0m0.056s 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:17.606 ************************************ 00:06:17.606 END TEST filesystem_ext4 00:06:17.606 ************************************ 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:17.606 12:45:35 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:17.606 ************************************ 00:06:17.606 START TEST filesystem_btrfs 00:06:17.606 ************************************ 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:17.606 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:17.866 btrfs-progs v6.6.2 00:06:17.866 See https://btrfs.readthedocs.io for more information. 00:06:17.866 00:06:17.866 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:17.866 NOTE: several default settings have changed in version 5.15, please make sure 00:06:17.866 this does not affect your deployments: 00:06:17.866 - DUP for metadata (-m dup) 00:06:17.867 - enabled no-holes (-O no-holes) 00:06:17.867 - enabled free-space-tree (-R free-space-tree) 00:06:17.867 00:06:17.867 Label: (null) 00:06:17.867 UUID: 51fc9ebd-4574-473c-a082-ccaf14a20486 00:06:17.867 Node size: 16384 00:06:17.867 Sector size: 4096 00:06:17.867 Filesystem size: 510.00MiB 00:06:17.867 Block group profiles: 00:06:17.867 Data: single 8.00MiB 00:06:17.867 Metadata: DUP 32.00MiB 00:06:17.867 System: DUP 8.00MiB 00:06:17.867 SSD detected: yes 00:06:17.867 Zoned device: no 00:06:17.867 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:17.867 Runtime features: free-space-tree 00:06:17.867 Checksum: crc32c 00:06:17.867 Number of devices: 1 00:06:17.867 Devices: 00:06:17.867 ID SIZE PATH 00:06:17.867 1 510.00MiB /dev/nvme0n1p1 00:06:17.867 00:06:17.867 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:17.867 12:45:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3288922 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:18.435 00:06:18.435 real 0m0.784s 00:06:18.435 user 0m0.006s 00:06:18.435 sys 0m0.126s 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:18.435 ************************************ 00:06:18.435 END TEST filesystem_btrfs 00:06:18.435 ************************************ 00:06:18.435 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:18.436 ************************************ 00:06:18.436 START TEST filesystem_xfs 00:06:18.436 ************************************ 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:18.436 12:45:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:18.436 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:18.436 = sectsz=512 attr=2, projid32bit=1 00:06:18.436 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:18.436 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:18.436 data = bsize=4096 blocks=130560, imaxpct=25 00:06:18.436 = sunit=0 swidth=0 blks 00:06:18.436 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:18.436 log =internal log bsize=4096 blocks=16384, version=2 00:06:18.436 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:18.436 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:19.372 Discarding blocks...Done. 
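mkfs.xfs has just formatted /dev/nvme0n1p1; what follows in the trace is the same mount-and-write verification cycle already run for ext4 and btrfs. A condensed sketch of that per-filesystem cycle as traced in target/filesystem.sh (device paths and the pid variable as used in this run):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                        # create a file through the new filesystem
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                           # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still visible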
00:06:19.372 12:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:19.372 12:45:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3288922 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:21.279 00:06:21.279 real 0m2.997s 00:06:21.279 user 0m0.016s 00:06:21.279 sys 0m0.058s 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.279 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:21.279 ************************************ 00:06:21.279 END TEST filesystem_xfs 00:06:21.279 ************************************ 00:06:21.537 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:21.537 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:21.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:21.796 12:45:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3288922 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3288922 ']' 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3288922 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3288922 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3288922' 00:06:21.796 killing process with pid 3288922 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3288922 00:06:21.796 12:45:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3288922 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:22.364 00:06:22.364 real 0m11.120s 00:06:22.364 user 0m42.538s 00:06:22.364 sys 0m1.675s 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.364 ************************************ 00:06:22.364 END TEST nvmf_filesystem_no_in_capsule 00:06:22.364 ************************************ 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.364 ************************************ 00:06:22.364 START TEST nvmf_filesystem_in_capsule 00:06:22.364 ************************************ 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3290475 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3290475 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3290475 ']' 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:22.364 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.364 [2024-07-15 12:45:40.567298] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:06:22.364 [2024-07-15 12:45:40.567396] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.624 [2024-07-15 12:45:40.634588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.624 [2024-07-15 12:45:40.738506] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.624 [2024-07-15 12:45:40.738574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
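End of the no-in-capsule pass; from here the identical filesystem scenario is replayed with in_capsule=4096, i.e. the TCP transport is created with a 4096-byte in-capsule data size (the -c argument) instead of 0, so small write payloads can travel inside the NVMe/TCP command capsule. The target bring-up traced below is the same JSON-RPC sequence as in the first pass; a condensed sketch (rpc_cmd is the autotest wrapper around SPDK's JSON-RPC client):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # first pass used -c 0
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ramdisk bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420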
00:06:22.624 [2024-07-15 12:45:40.738601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.624 [2024-07-15 12:45:40.738613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.624 [2024-07-15 12:45:40.738628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.624 [2024-07-15 12:45:40.738708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.624 [2024-07-15 12:45:40.738775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.624 [2024-07-15 12:45:40.738842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.624 [2024-07-15 12:45:40.738845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 [2024-07-15 12:45:40.895671] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.883 12:45:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 Malloc1 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.883 12:45:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:22.883 [2024-07-15 12:45:41.077480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.883 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:23.143 { 00:06:23.143 "name": "Malloc1", 00:06:23.143 "aliases": [ 00:06:23.143 "26d6837d-77d2-4383-b2a4-1b3205d22423" 00:06:23.143 ], 00:06:23.143 "product_name": "Malloc disk", 00:06:23.143 "block_size": 512, 00:06:23.143 "num_blocks": 1048576, 00:06:23.143 "uuid": "26d6837d-77d2-4383-b2a4-1b3205d22423", 00:06:23.143 "assigned_rate_limits": { 00:06:23.143 "rw_ios_per_sec": 0, 00:06:23.143 "rw_mbytes_per_sec": 0, 00:06:23.143 "r_mbytes_per_sec": 0, 00:06:23.143 "w_mbytes_per_sec": 0 00:06:23.143 }, 00:06:23.143 "claimed": true, 00:06:23.143 "claim_type": "exclusive_write", 00:06:23.143 "zoned": false, 00:06:23.143 "supported_io_types": { 00:06:23.143 "read": true, 00:06:23.143 "write": true, 00:06:23.143 "unmap": true, 00:06:23.143 "flush": true, 00:06:23.143 "reset": true, 00:06:23.143 "nvme_admin": false, 00:06:23.143 "nvme_io": false, 00:06:23.143 "nvme_io_md": false, 00:06:23.143 "write_zeroes": true, 00:06:23.143 "zcopy": true, 00:06:23.143 "get_zone_info": false, 00:06:23.143 "zone_management": false, 00:06:23.143 
"zone_append": false, 00:06:23.143 "compare": false, 00:06:23.143 "compare_and_write": false, 00:06:23.143 "abort": true, 00:06:23.143 "seek_hole": false, 00:06:23.143 "seek_data": false, 00:06:23.143 "copy": true, 00:06:23.143 "nvme_iov_md": false 00:06:23.143 }, 00:06:23.143 "memory_domains": [ 00:06:23.143 { 00:06:23.143 "dma_device_id": "system", 00:06:23.143 "dma_device_type": 1 00:06:23.143 }, 00:06:23.143 { 00:06:23.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.143 "dma_device_type": 2 00:06:23.143 } 00:06:23.143 ], 00:06:23.143 "driver_specific": {} 00:06:23.143 } 00:06:23.143 ]' 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:23.143 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:23.709 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:23.709 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:23.709 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:23.709 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:23.709 12:45:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:25.612 12:45:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:26.179 12:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:27.116 12:45:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:28.052 12:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:28.052 12:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:28.052 12:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.052 12:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.052 12:45:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.052 ************************************ 00:06:28.052 START TEST filesystem_in_capsule_ext4 00:06:28.052 ************************************ 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:28.052 12:45:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:28.052 mke2fs 1.46.5 (30-Dec-2021) 00:06:28.052 Discarding device blocks: 0/522240 done 00:06:28.052 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:28.052 Filesystem UUID: 03d1dade-8c4c-4162-a1a1-7c6f1c5a8cda 00:06:28.052 Superblock backups stored on blocks: 00:06:28.052 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:28.052 00:06:28.052 Allocating group tables: 0/64 done 00:06:28.052 Writing inode tables: 0/64 done 00:06:28.052 Creating journal (8192 blocks): done 00:06:28.052 Writing superblocks and filesystem accounting information: 0/64 done 00:06:28.052 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:28.052 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3290475 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:28.620 00:06:28.620 real 0m0.750s 00:06:28.620 user 0m0.015s 00:06:28.620 sys 0m0.054s 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:28.620 ************************************ 00:06:28.620 END TEST filesystem_in_capsule_ext4 00:06:28.620 ************************************ 00:06:28.620 
12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.620 ************************************ 00:06:28.620 START TEST filesystem_in_capsule_btrfs 00:06:28.620 ************************************ 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:28.620 12:45:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:29.187 btrfs-progs v6.6.2 00:06:29.187 See https://btrfs.readthedocs.io for more information. 00:06:29.187 00:06:29.187 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:29.187 NOTE: several default settings have changed in version 5.15, please make sure 00:06:29.187 this does not affect your deployments: 00:06:29.187 - DUP for metadata (-m dup) 00:06:29.187 - enabled no-holes (-O no-holes) 00:06:29.187 - enabled free-space-tree (-R free-space-tree) 00:06:29.187 00:06:29.187 Label: (null) 00:06:29.187 UUID: 762f1f7a-97dd-4b75-a4ca-cd7a68ed5d11 00:06:29.187 Node size: 16384 00:06:29.187 Sector size: 4096 00:06:29.187 Filesystem size: 510.00MiB 00:06:29.187 Block group profiles: 00:06:29.187 Data: single 8.00MiB 00:06:29.187 Metadata: DUP 32.00MiB 00:06:29.187 System: DUP 8.00MiB 00:06:29.187 SSD detected: yes 00:06:29.187 Zoned device: no 00:06:29.187 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:29.187 Runtime features: free-space-tree 00:06:29.187 Checksum: crc32c 00:06:29.187 Number of devices: 1 00:06:29.187 Devices: 00:06:29.187 ID SIZE PATH 00:06:29.187 1 510.00MiB /dev/nvme0n1p1 00:06:29.187 00:06:29.187 12:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:29.187 12:45:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3290475 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:30.181 00:06:30.181 real 0m1.285s 00:06:30.181 user 0m0.016s 00:06:30.181 sys 0m0.115s 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:30.181 ************************************ 00:06:30.181 END TEST filesystem_in_capsule_btrfs 00:06:30.181 ************************************ 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:30.181 ************************************ 00:06:30.181 START TEST filesystem_in_capsule_xfs 00:06:30.181 ************************************ 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:30.181 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:30.181 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:30.181 = sectsz=512 attr=2, projid32bit=1 00:06:30.181 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:30.181 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:30.181 data = bsize=4096 blocks=130560, imaxpct=25 00:06:30.181 = sunit=0 swidth=0 blks 00:06:30.181 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:30.181 log =internal log bsize=4096 blocks=16384, version=2 00:06:30.181 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:30.181 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:30.748 Discarding blocks...Done. 
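The three mkfs invocations in this test (mkfs.ext4 -F, mkfs.btrfs -f, mkfs.xfs -f) come from the same make_filesystem helper; the only per-filesystem difference visible in the trace is the force flag. A minimal sketch of that helper, following the autotest_common.sh@924-@935 lines echoed above (the retry handling around a failed mkfs is left out):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F                       # mkfs.ext4 forces with -F
      else
          force=-f                       # mkfs.btrfs and mkfs.xfs force with -f
      fi
      mkfs."$fstype" $force "$dev_name"
  }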
00:06:30.749 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:30.749 12:45:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3290475 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.282 00:06:33.282 real 0m3.097s 00:06:33.282 user 0m0.014s 00:06:33.282 sys 0m0.059s 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:33.282 ************************************ 00:06:33.282 END TEST filesystem_in_capsule_xfs 00:06:33.282 ************************************ 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:33.282 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:33.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:33.570 12:45:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3290475 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3290475 ']' 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3290475 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3290475 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3290475' 00:06:33.570 killing process with pid 3290475 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3290475 00:06:33.570 12:45:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3290475 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:34.140 00:06:34.140 real 0m11.664s 00:06:34.140 user 0m44.565s 00:06:34.140 sys 0m1.805s 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:34.140 ************************************ 00:06:34.140 END TEST nvmf_filesystem_in_capsule 00:06:34.140 ************************************ 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:34.140 rmmod nvme_tcp 00:06:34.140 rmmod nvme_fabrics 00:06:34.140 rmmod nvme_keyring 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:34.140 12:45:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.682 12:45:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:36.682 00:06:36.682 real 0m27.437s 00:06:36.682 user 1m28.086s 00:06:36.682 sys 0m5.153s 00:06:36.682 12:45:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.682 12:45:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.682 ************************************ 00:06:36.682 END TEST nvmf_filesystem 00:06:36.682 ************************************ 00:06:36.682 12:45:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:36.682 12:45:54 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:36.682 12:45:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.682 12:45:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.682 12:45:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.682 ************************************ 00:06:36.682 START TEST nvmf_target_discovery 00:06:36.682 ************************************ 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:36.682 * Looking for test storage... 
00:06:36.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.682 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.683 12:45:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.585 12:45:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:38.585 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:38.585 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:38.585 Found net devices under 0000:84:00.0: cvl_0_0 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.585 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:38.586 Found net devices under 0000:84:00.1: cvl_0_1 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:38.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:06:38.586 00:06:38.586 --- 10.0.0.2 ping statistics --- 00:06:38.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.586 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:38.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:06:38.586 00:06:38.586 --- 10.0.0.1 ping statistics --- 00:06:38.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.586 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3293970 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3293970 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3293970 ']' 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:38.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.586 12:45:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.843 [2024-07-15 12:45:56.825099] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:06:38.843 [2024-07-15 12:45:56.825185] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.843 [2024-07-15 12:45:56.891410] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:38.843 [2024-07-15 12:45:56.996655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:38.843 [2024-07-15 12:45:56.996712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:38.843 [2024-07-15 12:45:56.996753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:38.843 [2024-07-15 12:45:56.996772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:38.843 [2024-07-15 12:45:56.996785] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:38.843 [2024-07-15 12:45:56.996862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.843 [2024-07-15 12:45:56.996934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.843 [2024-07-15 12:45:56.997012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.843 [2024-07-15 12:45:56.997019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 [2024-07-15 12:45:57.150707] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
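The RPC sequence that starts here repeats for four subsystems; condensed, the target-side setup in discovery.sh is roughly the loop below. This is a sketch from the rpc_cmd entries in this trace (rpc_cmd is the harness wrapper around the SPDK RPC client; 10.0.0.2, 4420 and 4430 are the addresses of this run, and 102400/512 are NULL_BDEV_SIZE/NULL_BLOCK_SIZE from discovery.sh):

  # target-side setup of the discovery test, condensed from the rpc_cmd entries
  for i in $(seq 1 4); do
      rpc_cmd bdev_null_create "Null$i" 102400 512                          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # expose the discovery subsystem
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # shows up as entry 5 in the log below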
00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.099 Null1 00:06:39.099 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 [2024-07-15 12:45:57.191058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 Null2 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:39.100 12:45:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 Null3 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 Null4 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.100 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.358 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.358 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:06:39.358 00:06:39.358 Discovery Log Number of Records 6, Generation counter 6 00:06:39.358 =====Discovery Log Entry 0====== 00:06:39.358 trtype: tcp 00:06:39.358 adrfam: ipv4 00:06:39.358 subtype: current discovery subsystem 00:06:39.358 treq: not required 00:06:39.358 portid: 0 00:06:39.358 trsvcid: 4420 00:06:39.358 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:39.358 traddr: 10.0.0.2 00:06:39.358 eflags: explicit discovery connections, duplicate discovery information 00:06:39.358 sectype: none 00:06:39.358 =====Discovery Log Entry 1====== 00:06:39.358 trtype: tcp 00:06:39.358 adrfam: ipv4 00:06:39.358 subtype: nvme subsystem 00:06:39.358 treq: not required 00:06:39.358 portid: 0 00:06:39.358 trsvcid: 4420 00:06:39.358 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:39.358 traddr: 10.0.0.2 00:06:39.358 eflags: none 00:06:39.358 sectype: none 00:06:39.358 =====Discovery Log Entry 2====== 00:06:39.358 trtype: tcp 00:06:39.358 adrfam: ipv4 00:06:39.358 subtype: nvme subsystem 00:06:39.358 treq: not required 00:06:39.358 portid: 0 00:06:39.358 trsvcid: 4420 00:06:39.358 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:39.358 traddr: 10.0.0.2 00:06:39.358 eflags: none 00:06:39.358 sectype: none 00:06:39.358 =====Discovery Log Entry 3====== 00:06:39.358 trtype: tcp 00:06:39.358 adrfam: ipv4 00:06:39.358 subtype: nvme subsystem 00:06:39.358 treq: not required 00:06:39.358 portid: 0 00:06:39.358 trsvcid: 4420 00:06:39.358 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:39.358 traddr: 10.0.0.2 00:06:39.358 eflags: none 00:06:39.358 sectype: none 00:06:39.358 =====Discovery Log Entry 4====== 00:06:39.358 trtype: tcp 00:06:39.358 adrfam: ipv4 00:06:39.358 subtype: nvme subsystem 00:06:39.358 treq: not required 
00:06:39.358 portid: 0 00:06:39.358 trsvcid: 4420 00:06:39.358 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:39.358 traddr: 10.0.0.2 00:06:39.358 eflags: none 00:06:39.358 sectype: none 00:06:39.358 =====Discovery Log Entry 5====== 00:06:39.358 trtype: tcp 00:06:39.358 adrfam: ipv4 00:06:39.358 subtype: discovery subsystem referral 00:06:39.358 treq: not required 00:06:39.358 portid: 0 00:06:39.358 trsvcid: 4430 00:06:39.358 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:39.358 traddr: 10.0.0.2 00:06:39.358 eflags: none 00:06:39.358 sectype: none 00:06:39.358 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:39.358 Perform nvmf subsystem discovery via RPC 00:06:39.358 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:39.358 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.358 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.358 [ 00:06:39.358 { 00:06:39.358 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:39.358 "subtype": "Discovery", 00:06:39.358 "listen_addresses": [ 00:06:39.358 { 00:06:39.358 "trtype": "TCP", 00:06:39.358 "adrfam": "IPv4", 00:06:39.358 "traddr": "10.0.0.2", 00:06:39.358 "trsvcid": "4420" 00:06:39.358 } 00:06:39.358 ], 00:06:39.358 "allow_any_host": true, 00:06:39.358 "hosts": [] 00:06:39.358 }, 00:06:39.358 { 00:06:39.358 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:39.358 "subtype": "NVMe", 00:06:39.358 "listen_addresses": [ 00:06:39.358 { 00:06:39.358 "trtype": "TCP", 00:06:39.358 "adrfam": "IPv4", 00:06:39.358 "traddr": "10.0.0.2", 00:06:39.358 "trsvcid": "4420" 00:06:39.358 } 00:06:39.358 ], 00:06:39.358 "allow_any_host": true, 00:06:39.358 "hosts": [], 00:06:39.358 "serial_number": "SPDK00000000000001", 00:06:39.358 "model_number": "SPDK bdev Controller", 00:06:39.358 "max_namespaces": 32, 00:06:39.358 "min_cntlid": 1, 00:06:39.358 "max_cntlid": 65519, 00:06:39.358 "namespaces": [ 00:06:39.358 { 00:06:39.358 "nsid": 1, 00:06:39.358 "bdev_name": "Null1", 00:06:39.358 "name": "Null1", 00:06:39.358 "nguid": "6F5D4C2A982F414E9EE61CFB49E56733", 00:06:39.358 "uuid": "6f5d4c2a-982f-414e-9ee6-1cfb49e56733" 00:06:39.358 } 00:06:39.358 ] 00:06:39.358 }, 00:06:39.358 { 00:06:39.358 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:39.358 "subtype": "NVMe", 00:06:39.358 "listen_addresses": [ 00:06:39.358 { 00:06:39.358 "trtype": "TCP", 00:06:39.358 "adrfam": "IPv4", 00:06:39.358 "traddr": "10.0.0.2", 00:06:39.358 "trsvcid": "4420" 00:06:39.358 } 00:06:39.358 ], 00:06:39.358 "allow_any_host": true, 00:06:39.358 "hosts": [], 00:06:39.358 "serial_number": "SPDK00000000000002", 00:06:39.358 "model_number": "SPDK bdev Controller", 00:06:39.358 "max_namespaces": 32, 00:06:39.358 "min_cntlid": 1, 00:06:39.358 "max_cntlid": 65519, 00:06:39.358 "namespaces": [ 00:06:39.358 { 00:06:39.358 "nsid": 1, 00:06:39.358 "bdev_name": "Null2", 00:06:39.358 "name": "Null2", 00:06:39.358 "nguid": "E87B013F5C9144D6AFD3F99AE79ECE32", 00:06:39.358 "uuid": "e87b013f-5c91-44d6-afd3-f99ae79ece32" 00:06:39.358 } 00:06:39.358 ] 00:06:39.358 }, 00:06:39.358 { 00:06:39.358 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:39.358 "subtype": "NVMe", 00:06:39.358 "listen_addresses": [ 00:06:39.358 { 00:06:39.358 "trtype": "TCP", 00:06:39.358 "adrfam": "IPv4", 00:06:39.359 "traddr": "10.0.0.2", 00:06:39.359 "trsvcid": "4420" 00:06:39.359 } 00:06:39.359 ], 00:06:39.359 "allow_any_host": true, 
00:06:39.359 "hosts": [], 00:06:39.359 "serial_number": "SPDK00000000000003", 00:06:39.359 "model_number": "SPDK bdev Controller", 00:06:39.359 "max_namespaces": 32, 00:06:39.359 "min_cntlid": 1, 00:06:39.359 "max_cntlid": 65519, 00:06:39.359 "namespaces": [ 00:06:39.359 { 00:06:39.359 "nsid": 1, 00:06:39.359 "bdev_name": "Null3", 00:06:39.359 "name": "Null3", 00:06:39.359 "nguid": "B4E8EA1A0F0C46268697DD29B103E7E8", 00:06:39.359 "uuid": "b4e8ea1a-0f0c-4626-8697-dd29b103e7e8" 00:06:39.359 } 00:06:39.359 ] 00:06:39.359 }, 00:06:39.359 { 00:06:39.359 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:39.359 "subtype": "NVMe", 00:06:39.359 "listen_addresses": [ 00:06:39.359 { 00:06:39.359 "trtype": "TCP", 00:06:39.359 "adrfam": "IPv4", 00:06:39.359 "traddr": "10.0.0.2", 00:06:39.359 "trsvcid": "4420" 00:06:39.359 } 00:06:39.359 ], 00:06:39.359 "allow_any_host": true, 00:06:39.359 "hosts": [], 00:06:39.359 "serial_number": "SPDK00000000000004", 00:06:39.359 "model_number": "SPDK bdev Controller", 00:06:39.359 "max_namespaces": 32, 00:06:39.359 "min_cntlid": 1, 00:06:39.359 "max_cntlid": 65519, 00:06:39.359 "namespaces": [ 00:06:39.359 { 00:06:39.359 "nsid": 1, 00:06:39.359 "bdev_name": "Null4", 00:06:39.359 "name": "Null4", 00:06:39.359 "nguid": "C0223067FCB944DF902E30E02A9EB0A6", 00:06:39.359 "uuid": "c0223067-fcb9-44df-902e-30e02a9eb0a6" 00:06:39.359 } 00:06:39.359 ] 00:06:39.359 } 00:06:39.359 ] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:39.359 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:39.359 rmmod nvme_tcp 00:06:39.359 rmmod nvme_fabrics 00:06:39.359 rmmod nvme_keyring 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3293970 ']' 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3293970 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3293970 ']' 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3293970 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3293970 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3293970' 00:06:39.618 killing process with pid 3293970 00:06:39.618 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3293970 00:06:39.619 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3293970 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:39.878 12:45:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.783 12:45:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:41.783 00:06:41.783 real 0m5.549s 00:06:41.783 user 0m4.310s 00:06:41.783 sys 0m1.921s 00:06:41.783 12:45:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.783 12:45:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:41.783 ************************************ 00:06:41.783 END TEST nvmf_target_discovery 00:06:41.783 ************************************ 00:06:41.783 12:45:59 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:06:41.783 12:45:59 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:41.783 12:45:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.783 12:45:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.783 12:45:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.783 ************************************ 00:06:41.783 START TEST nvmf_referrals 00:06:41.783 ************************************ 00:06:41.783 12:45:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:42.039 * Looking for test storage... 00:06:42.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
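With the three referral addresses (127.0.0.2-127.0.0.4) and referral port 4430 defined, referrals.sh runs an add/inspect/remove cycle against the discovery service on 10.0.0.2:8009. The condensed sketch below restates that flow as stand-alone commands; the rpc.py path is an assumed expansion of the test's rpc_cmd wrapper, while the RPC names, nvme-cli invocation, and jq filter are copied from the calls traced later in this log:

    # Referral round-trip as exercised by referrals.sh (condensed; RPC path is an assumption).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_OPTS=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
               --hostid=cd6acfbe-4794-e311-a299-001e67a97b02)

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                        # NVMF_REFERRAL_IP_1..3 above
        $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length                      # the test expects 3 here

    # Initiator-side view: referrals appear as extra discovery log records on port 8009
    nvme discover "${HOST_OPTS[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                        # removing them empties the list again
        $RPC nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length                      # back to 0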
00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:42.039 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:42.040 12:46:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.565 12:46:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:44.565 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:44.566 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:44.566 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:44.566 12:46:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:44.566 Found net devices under 0000:84:00.0: cvl_0_0 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:44.566 Found net devices under 0000:84:00.1: cvl_0_1 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:44.566 12:46:02 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:44.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:06:44.566 00:06:44.566 --- 10.0.0.2 ping statistics --- 00:06:44.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.566 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:06:44.566 00:06:44.566 --- 10.0.0.1 ping statistics --- 00:06:44.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.566 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3296075 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3296075 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3296075 ']' 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:44.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.566 [2024-07-15 12:46:02.423564] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:06:44.566 [2024-07-15 12:46:02.423673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.566 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.566 [2024-07-15 12:46:02.489883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:44.566 [2024-07-15 12:46:02.601046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.566 [2024-07-15 12:46:02.601114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.566 [2024-07-15 12:46:02.601134] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.566 [2024-07-15 12:46:02.601151] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.566 [2024-07-15 12:46:02.601166] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.566 [2024-07-15 12:46:02.601256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.566 [2024-07-15 12:46:02.601331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.566 [2024-07-15 12:46:02.601359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.566 [2024-07-15 12:46:02.601372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.566 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 [2024-07-15 12:46:02.758696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.567 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 [2024-07-15 12:46:02.770951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:44.826 12:46:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.826 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.098 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.099 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:45.375 12:46:03 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.375 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:45.632 12:46:03 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.632 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:45.889 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:45.889 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:45.889 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:45.889 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:45.889 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:45.889 12:46:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:45.889 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:46.148 
12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:46.148 rmmod nvme_tcp 00:06:46.148 rmmod nvme_fabrics 00:06:46.148 rmmod nvme_keyring 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3296075 ']' 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3296075 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3296075 ']' 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3296075 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3296075 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3296075' 00:06:46.148 killing process with pid 3296075 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3296075 00:06:46.148 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3296075 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.405 12:46:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.932 12:46:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:48.932 00:06:48.932 real 0m6.599s 00:06:48.932 user 0m8.997s 00:06:48.932 sys 0m2.243s 00:06:48.932 12:46:06 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.932 12:46:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:48.932 ************************************ 00:06:48.932 END TEST nvmf_referrals 00:06:48.932 ************************************ 00:06:48.932 12:46:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:48.932 12:46:06 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:48.932 12:46:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:48.932 12:46:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.932 12:46:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.932 ************************************ 00:06:48.932 START TEST nvmf_connect_disconnect 00:06:48.932 ************************************ 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:48.932 * Looking for test storage... 00:06:48.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.932 12:46:06 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:48.932 12:46:06 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:06:50.863 Found 0000:84:00.0 (0x8086 - 0x159b) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:06:50.863 Found 0000:84:00.1 (0x8086 - 0x159b) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:50.863 12:46:08 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:06:50.863 Found net devices under 0000:84:00.0: cvl_0_0 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:06:50.863 Found net devices under 0000:84:00.1: cvl_0_1 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:50.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:06:50.863 00:06:50.863 --- 10.0.0.2 ping statistics --- 00:06:50.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.863 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
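[editorial note] prepare_net_devs / nvmf_tcp_init above split the two E810 ports between the host and a fresh network namespace, so the SPDK target and the kernel initiator talk over a real link while staying on one machine. A condensed sketch of that topology setup, using only commands that appear in the trace for this run:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # host -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> host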
00:06:50.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:06:50.863 00:06:50.863 --- 10.0.0.1 ping statistics --- 00:06:50.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.863 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:50.863 12:46:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:50.863 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3298384 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3298384 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3298384 ']' 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.864 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:50.864 [2024-07-15 12:46:09.056895] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
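[editorial note] nvmfappstart then launches the SPDK target inside that namespace and waitforlisten blocks until the RPC socket answers. A rough, hedged equivalent of those two steps; polling spdk_get_version is only one way to approximate waitforlisten, and /var/tmp/spdk.sock is the default socket path used throughout this run.

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                    # pid of the ip-netns-exec wrapper; close enough for a sketch
# crude readiness wait: retry an innocuous RPC until the app is listening
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
done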
00:06:50.864 [2024-07-15 12:46:09.056974] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.121 [2024-07-15 12:46:09.119948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.121 [2024-07-15 12:46:09.221937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.121 [2024-07-15 12:46:09.221995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.121 [2024-07-15 12:46:09.222017] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.121 [2024-07-15 12:46:09.222047] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.121 [2024-07-15 12:46:09.222061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.121 [2024-07-15 12:46:09.222153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.121 [2024-07-15 12:46:09.222261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.121 [2024-07-15 12:46:09.222349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.121 [2024-07-15 12:46:09.222342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.380 [2024-07-15 12:46:09.372422] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:51.380 12:46:09 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:51.380 [2024-07-15 12:46:09.423898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:51.380 12:46:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:54.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:57.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:59.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:02.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.844 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:07:04.844 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:07:04.844 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.844 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:05.104 rmmod nvme_tcp 00:07:05.104 rmmod nvme_fabrics 00:07:05.104 rmmod nvme_keyring 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3298384 ']' 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3298384 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- 
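[editorial note] Pulling the rpc_cmd calls out of the trace, the target-side configuration and the five connect/disconnect iterations reduce to roughly the following. The rpc.py invocations mirror the rpc_cmd lines above; the exact nvme-cli flags used by connect_disconnect.sh are not shown in this log, so the loop body is an approximation built from the hostnqn/hostid generated for this run.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                                   # creates Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
                --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
                --hostid=cd6acfbe-4794-e311-a299-001e67a97b02
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1             # prints "disconnected 1 controller(s)", as seen above
done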
common/autotest_common.sh@948 -- # '[' -z 3298384 ']' 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3298384 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3298384 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3298384' 00:07:05.104 killing process with pid 3298384 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3298384 00:07:05.104 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3298384 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:05.364 12:46:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.272 12:46:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:07.272 00:07:07.272 real 0m18.857s 00:07:07.272 user 0m56.172s 00:07:07.272 sys 0m3.462s 00:07:07.272 12:46:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.272 12:46:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:07.272 ************************************ 00:07:07.272 END TEST nvmf_connect_disconnect 00:07:07.272 ************************************ 00:07:07.530 12:46:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:07.530 12:46:25 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:07.530 12:46:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:07.530 12:46:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.530 12:46:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:07.530 ************************************ 00:07:07.530 START TEST nvmf_multitarget 00:07:07.530 ************************************ 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:07:07.530 * Looking for test storage... 
00:07:07.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.530 12:46:25 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.531 12:46:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:10.065 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.065 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:10.066 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:10.066 Found net devices under 0000:84:00.0: cvl_0_0 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:10.066 Found net devices under 0000:84:00.1: cvl_0_1 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:10.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:07:10.066 00:07:10.066 --- 10.0.0.2 ping statistics --- 00:07:10.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.066 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:07:10.066 00:07:10.066 --- 10.0.0.1 ping statistics --- 00:07:10.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.066 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3302165 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3302165 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3302165 ']' 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.066 12:46:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:10.066 [2024-07-15 12:46:27.906043] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:07:10.066 [2024-07-15 12:46:27.906123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.066 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.066 [2024-07-15 12:46:27.970691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.066 [2024-07-15 12:46:28.071236] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.066 [2024-07-15 12:46:28.071293] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.066 [2024-07-15 12:46:28.071321] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.066 [2024-07-15 12:46:28.071333] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.066 [2024-07-15 12:46:28.071342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:10.066 [2024-07-15 12:46:28.071431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.066 [2024-07-15 12:46:28.071494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.066 [2024-07-15 12:46:28.071555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.066 [2024-07-15 12:46:28.071561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:10.066 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:07:10.324 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:07:10.324 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:07:10.324 "nvmf_tgt_1" 00:07:10.324 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:07:10.581 "nvmf_tgt_2" 00:07:10.582 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:10.582 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:07:10.582 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:07:10.582 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:07:10.582 true 00:07:10.841 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:07:10.841 true 00:07:10.841 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:07:10.841 12:46:28 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:10.841 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:10.841 rmmod nvme_tcp 00:07:11.101 rmmod nvme_fabrics 00:07:11.101 rmmod nvme_keyring 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3302165 ']' 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3302165 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3302165 ']' 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3302165 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3302165 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3302165' 00:07:11.101 killing process with pid 3302165 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3302165 00:07:11.101 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3302165 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- 
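[editorial note] The multitarget test itself is a small exercise of the multitarget_rpc.py helper: it confirms that a single default target exists, adds two more, and removes them again, checking the count with jq at each step. Condensed from the calls traced above for this run:

mt_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
$mt_rpc nvmf_get_targets | jq length          # expect 1 (the default target)
$mt_rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$mt_rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$mt_rpc nvmf_get_targets | jq length          # expect 3
$mt_rpc nvmf_delete_target -n nvmf_tgt_1
$mt_rpc nvmf_delete_target -n nvmf_tgt_2
$mt_rpc nvmf_get_targets | jq length          # back to 1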
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.361 12:46:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.266 12:46:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.266 00:07:13.266 real 0m5.914s 00:07:13.266 user 0m6.632s 00:07:13.266 sys 0m2.025s 00:07:13.266 12:46:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.266 12:46:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:07:13.266 ************************************ 00:07:13.266 END TEST nvmf_multitarget 00:07:13.266 ************************************ 00:07:13.266 12:46:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:13.266 12:46:31 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:13.266 12:46:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.266 12:46:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.266 12:46:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.523 ************************************ 00:07:13.523 START TEST nvmf_rpc 00:07:13.523 ************************************ 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:07:13.523 * Looking for test storage... 
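(Editor's note on the nvmf_multitarget run that closes out above: it drives everything through test/nvmf/target/multitarget_rpc.py, counting targets with jq, creating nvmf_tgt_1 and nvmf_tgt_2, confirming the count goes from 1 to 3, deleting both, and confirming it drops back to 1. A minimal standalone sketch of that sequence, using only commands and flags visible in the trace and assuming a running nvmf_tgt:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length            # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32  # flags as logged in the trace
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length            # 3
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length            # back to 1

The jq length / '[' N '!=' N ']' pairs in the trace are exactly these count checks.)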
00:07:13.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:07:13.523 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.524 12:46:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:16.051 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:16.051 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:16.051 Found net devices under 0000:84:00.0: cvl_0_0 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:16.051 Found net devices under 0000:84:00.1: cvl_0_1 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:07:16.051 00:07:16.051 --- 10.0.0.2 ping statistics --- 00:07:16.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.051 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:16.051 00:07:16.051 --- 10.0.0.1 ping statistics --- 00:07:16.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.051 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.051 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3304281 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3304281 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3304281 ']' 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.052 12:46:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.052 [2024-07-15 12:46:33.981518] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:07:16.052 [2024-07-15 12:46:33.981584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.052 [2024-07-15 12:46:34.043604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:16.052 [2024-07-15 12:46:34.146509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.052 [2024-07-15 12:46:34.146558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:16.052 [2024-07-15 12:46:34.146585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.052 [2024-07-15 12:46:34.146596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.052 [2024-07-15 12:46:34.146606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.052 [2024-07-15 12:46:34.146684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.052 [2024-07-15 12:46:34.146813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.052 [2024-07-15 12:46:34.146839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.052 [2024-07-15 12:46:34.146842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.987 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:07:16.987 "tick_rate": 2700000000, 00:07:16.987 "poll_groups": [ 00:07:16.987 { 00:07:16.987 "name": "nvmf_tgt_poll_group_000", 00:07:16.987 "admin_qpairs": 0, 00:07:16.987 "io_qpairs": 0, 00:07:16.987 "current_admin_qpairs": 0, 00:07:16.987 "current_io_qpairs": 0, 00:07:16.987 "pending_bdev_io": 0, 00:07:16.987 "completed_nvme_io": 0, 00:07:16.987 "transports": [] 00:07:16.987 }, 00:07:16.987 { 00:07:16.987 "name": "nvmf_tgt_poll_group_001", 00:07:16.987 "admin_qpairs": 0, 00:07:16.987 "io_qpairs": 0, 00:07:16.987 "current_admin_qpairs": 0, 00:07:16.987 "current_io_qpairs": 0, 00:07:16.987 "pending_bdev_io": 0, 00:07:16.987 "completed_nvme_io": 0, 00:07:16.987 "transports": [] 00:07:16.987 }, 00:07:16.987 { 00:07:16.987 "name": "nvmf_tgt_poll_group_002", 00:07:16.987 "admin_qpairs": 0, 00:07:16.987 "io_qpairs": 0, 00:07:16.987 "current_admin_qpairs": 0, 00:07:16.987 "current_io_qpairs": 0, 00:07:16.987 "pending_bdev_io": 0, 00:07:16.987 "completed_nvme_io": 0, 00:07:16.987 "transports": [] 00:07:16.987 }, 00:07:16.987 { 00:07:16.987 "name": "nvmf_tgt_poll_group_003", 00:07:16.987 "admin_qpairs": 0, 00:07:16.987 "io_qpairs": 0, 00:07:16.988 "current_admin_qpairs": 0, 00:07:16.988 "current_io_qpairs": 0, 00:07:16.988 "pending_bdev_io": 0, 00:07:16.988 "completed_nvme_io": 0, 00:07:16.988 "transports": [] 00:07:16.988 } 00:07:16.988 ] 00:07:16.988 }' 00:07:16.988 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:07:16.988 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:07:16.988 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:07:16.988 12:46:34 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:07:16.988 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:07:16.988 12:46:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 [2024-07-15 12:46:35.028888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:07:16.988 "tick_rate": 2700000000, 00:07:16.988 "poll_groups": [ 00:07:16.988 { 00:07:16.988 "name": "nvmf_tgt_poll_group_000", 00:07:16.988 "admin_qpairs": 0, 00:07:16.988 "io_qpairs": 0, 00:07:16.988 "current_admin_qpairs": 0, 00:07:16.988 "current_io_qpairs": 0, 00:07:16.988 "pending_bdev_io": 0, 00:07:16.988 "completed_nvme_io": 0, 00:07:16.988 "transports": [ 00:07:16.988 { 00:07:16.988 "trtype": "TCP" 00:07:16.988 } 00:07:16.988 ] 00:07:16.988 }, 00:07:16.988 { 00:07:16.988 "name": "nvmf_tgt_poll_group_001", 00:07:16.988 "admin_qpairs": 0, 00:07:16.988 "io_qpairs": 0, 00:07:16.988 "current_admin_qpairs": 0, 00:07:16.988 "current_io_qpairs": 0, 00:07:16.988 "pending_bdev_io": 0, 00:07:16.988 "completed_nvme_io": 0, 00:07:16.988 "transports": [ 00:07:16.988 { 00:07:16.988 "trtype": "TCP" 00:07:16.988 } 00:07:16.988 ] 00:07:16.988 }, 00:07:16.988 { 00:07:16.988 "name": "nvmf_tgt_poll_group_002", 00:07:16.988 "admin_qpairs": 0, 00:07:16.988 "io_qpairs": 0, 00:07:16.988 "current_admin_qpairs": 0, 00:07:16.988 "current_io_qpairs": 0, 00:07:16.988 "pending_bdev_io": 0, 00:07:16.988 "completed_nvme_io": 0, 00:07:16.988 "transports": [ 00:07:16.988 { 00:07:16.988 "trtype": "TCP" 00:07:16.988 } 00:07:16.988 ] 00:07:16.988 }, 00:07:16.988 { 00:07:16.988 "name": "nvmf_tgt_poll_group_003", 00:07:16.988 "admin_qpairs": 0, 00:07:16.988 "io_qpairs": 0, 00:07:16.988 "current_admin_qpairs": 0, 00:07:16.988 "current_io_qpairs": 0, 00:07:16.988 "pending_bdev_io": 0, 00:07:16.988 "completed_nvme_io": 0, 00:07:16.988 "transports": [ 00:07:16.988 { 00:07:16.988 "trtype": "TCP" 00:07:16.988 } 00:07:16.988 ] 00:07:16.988 } 00:07:16.988 ] 00:07:16.988 }' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
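(Editor's note on the jcount/jsum checks around this point: they are just jq plus wc/awk over nvmf_get_stats. With the target started on -m 0xF there are four poll groups, their "transports" arrays stay empty until nvmf_create_transport runs, and every queue-pair counter is zero while no host is connected. A rough equivalent outside the harness, assuming SPDK's scripts/rpc.py against the default RPC socket, whereas the trace itself goes through the test's rpc_cmd wrapper:

  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l        # 4 poll groups for -m 0xF
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # transport options as logged above
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'      # now reports {"trtype": "TCP"}
  scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'   # 0 with no hosts connected
)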
00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 Malloc1 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.988 [2024-07-15 12:46:35.182925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:16.988 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:07:17.247 [2024-07-15 12:46:35.205428] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:17.247 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:17.247 could not add new controller: failed to write to nvme-fabrics device 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.247 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.813 12:46:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:07:17.813 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:17.813 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:17.813 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:17.813 12:46:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:19.717 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:19.717 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:19.717 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:19.717 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:19.717 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:19.717 12:46:37 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:19.717 12:46:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:19.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:19.977 12:46:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.977 [2024-07-15 12:46:37.998150] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:07:19.977 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:19.977 could not add new controller: failed to write to nvme-fabrics device 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.977 12:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.544 12:46:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.544 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:20.544 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.544 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:20.544 12:46:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:22.448 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:22.707 12:46:40 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 [2024-07-15 12:46:40.791455] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.707 12:46:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.272 12:46:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.272 12:46:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:23.272 12:46:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.272 12:46:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:23.272 12:46:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:25.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.818 [2024-07-15 12:46:43.552165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.818 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.819 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.819 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:25.819 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.819 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.819 12:46:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.819 12:46:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.078 12:46:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.078 12:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:26.078 12:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.078 12:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.078 12:46:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:28.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 [2024-07-15 12:46:46.380196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.614 12:46:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:28.873 12:46:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:28.873 12:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:28.873 12:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:28.873 12:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:28.873 12:46:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:30.832 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.090 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 [2024-07-15 12:46:49.126846] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:31.091 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:31.661 12:46:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:31.661 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:31.661 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:31.661 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:31.661 12:46:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:33.565 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:33.565 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:33.565 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:33.565 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:33.565 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:33.565 
12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:33.565 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.823 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 [2024-07-15 12:46:51.864496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.824 12:46:51 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:33.824 12:46:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:34.391 12:46:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:34.391 12:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:34.391 12:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:34.391 12:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:34.391 12:46:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:36.288 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:36.288 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:36.288 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:36.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 [2024-07-15 12:46:54.652341] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 [2024-07-15 12:46:54.700396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.548 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.549 [2024-07-15 12:46:54.748542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.549 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.549 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.549 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.549 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 [2024-07-15 12:46:54.796699] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
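[Editor's note — not part of the captured console output.] The rpc_cmd calls traced above repeat the same subsystem lifecycle once per iteration of the target/rpc.sh loop. A condensed sketch of that cycle, assuming rpc_cmd simply forwards to the scripts/rpc.py client referenced later in this log; the NQN, serial, listener address and namespace name are taken verbatim from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME            # rpc.sh@100
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # rpc.sh@101
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1                            # rpc.sh@102
    $rpc nvmf_subsystem_allow_any_host "$nqn"                            # rpc.sh@103
    $rpc nvmf_subsystem_remove_ns "$nqn" 1                               # rpc.sh@105
    $rpc nvmf_delete_subsystem "$nqn"                                    # rpc.sh@107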
00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 [2024-07-15 12:46:54.844899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:36.808 "tick_rate": 2700000000, 00:07:36.808 "poll_groups": [ 00:07:36.808 { 00:07:36.808 "name": "nvmf_tgt_poll_group_000", 00:07:36.808 "admin_qpairs": 2, 00:07:36.808 "io_qpairs": 84, 00:07:36.808 "current_admin_qpairs": 0, 00:07:36.808 "current_io_qpairs": 0, 00:07:36.808 "pending_bdev_io": 0, 00:07:36.808 "completed_nvme_io": 130, 00:07:36.808 "transports": [ 00:07:36.808 { 00:07:36.808 "trtype": "TCP" 00:07:36.808 } 00:07:36.808 ] 00:07:36.808 }, 00:07:36.808 { 00:07:36.808 "name": "nvmf_tgt_poll_group_001", 00:07:36.808 "admin_qpairs": 2, 00:07:36.808 "io_qpairs": 84, 00:07:36.808 "current_admin_qpairs": 0, 00:07:36.808 "current_io_qpairs": 0, 00:07:36.808 "pending_bdev_io": 0, 00:07:36.808 "completed_nvme_io": 195, 00:07:36.808 "transports": [ 00:07:36.808 { 00:07:36.808 "trtype": "TCP" 00:07:36.808 } 00:07:36.808 ] 00:07:36.808 }, 00:07:36.808 { 00:07:36.808 
"name": "nvmf_tgt_poll_group_002", 00:07:36.808 "admin_qpairs": 1, 00:07:36.808 "io_qpairs": 84, 00:07:36.808 "current_admin_qpairs": 0, 00:07:36.808 "current_io_qpairs": 0, 00:07:36.808 "pending_bdev_io": 0, 00:07:36.808 "completed_nvme_io": 141, 00:07:36.808 "transports": [ 00:07:36.808 { 00:07:36.808 "trtype": "TCP" 00:07:36.808 } 00:07:36.808 ] 00:07:36.808 }, 00:07:36.808 { 00:07:36.808 "name": "nvmf_tgt_poll_group_003", 00:07:36.808 "admin_qpairs": 2, 00:07:36.808 "io_qpairs": 84, 00:07:36.808 "current_admin_qpairs": 0, 00:07:36.808 "current_io_qpairs": 0, 00:07:36.808 "pending_bdev_io": 0, 00:07:36.808 "completed_nvme_io": 220, 00:07:36.808 "transports": [ 00:07:36.808 { 00:07:36.808 "trtype": "TCP" 00:07:36.808 } 00:07:36.808 ] 00:07:36.808 } 00:07:36.808 ] 00:07:36.808 }' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.808 12:46:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.808 rmmod nvme_tcp 00:07:36.808 rmmod nvme_fabrics 00:07:36.808 rmmod nvme_keyring 00:07:37.095 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.095 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:37.095 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:37.095 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3304281 ']' 00:07:37.095 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3304281 00:07:37.095 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3304281 ']' 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3304281 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3304281 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3304281' 00:07:37.096 killing process with pid 3304281 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3304281 00:07:37.096 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3304281 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:37.356 12:46:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.264 12:46:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:39.264 00:07:39.264 real 0m25.896s 00:07:39.264 user 1m24.129s 00:07:39.264 sys 0m4.254s 00:07:39.264 12:46:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.264 12:46:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 ************************************ 00:07:39.264 END TEST nvmf_rpc 00:07:39.264 ************************************ 00:07:39.264 12:46:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:39.264 12:46:57 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:39.264 12:46:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:39.264 12:46:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.264 12:46:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.264 ************************************ 00:07:39.264 START TEST nvmf_invalid 00:07:39.264 ************************************ 00:07:39.264 12:46:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:39.523 * Looking for test storage... 
00:07:39.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.523 12:46:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.059 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.059 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.059 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.059 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.059 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.059 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:42.060 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:42.060 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:42.060 Found net devices under 0000:84:00.0: cvl_0_0 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:42.060 Found net devices under 0000:84:00.1: cvl_0_1 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:42.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:07:42.060 00:07:42.060 --- 10.0.0.2 ping statistics --- 00:07:42.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.060 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:07:42.060 00:07:42.060 --- 10.0.0.1 ping statistics --- 00:07:42.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.060 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3308922 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3308922 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3308922 ']' 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.060 12:46:59 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.060 [2024-07-15 12:46:59.917384] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:07:42.060 [2024-07-15 12:46:59.917474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.060 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.060 [2024-07-15 12:46:59.985992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.060 [2024-07-15 12:47:00.108202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.060 [2024-07-15 12:47:00.108267] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.060 [2024-07-15 12:47:00.108282] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.060 [2024-07-15 12:47:00.108292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.060 [2024-07-15 12:47:00.108302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.061 [2024-07-15 12:47:00.108384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.061 [2024-07-15 12:47:00.108451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.061 [2024-07-15 12:47:00.108477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.061 [2024-07-15 12:47:00.108480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:42.992 12:47:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11377 00:07:42.992 [2024-07-15 12:47:01.164567] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:42.992 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:42.992 { 00:07:42.992 "nqn": "nqn.2016-06.io.spdk:cnode11377", 00:07:42.992 "tgt_name": "foobar", 00:07:42.992 "method": "nvmf_create_subsystem", 00:07:42.992 "req_id": 1 00:07:42.992 } 00:07:42.992 Got JSON-RPC error response 00:07:42.992 response: 00:07:42.992 { 00:07:42.992 "code": -32603, 00:07:42.992 "message": "Unable to find target foobar" 00:07:42.992 }' 00:07:42.992 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:42.992 { 00:07:42.992 "nqn": "nqn.2016-06.io.spdk:cnode11377", 00:07:42.992 "tgt_name": "foobar", 00:07:42.992 "method": "nvmf_create_subsystem", 00:07:42.992 "req_id": 1 00:07:42.992 } 00:07:42.992 Got JSON-RPC error response 00:07:42.992 response: 00:07:42.992 { 00:07:42.992 "code": -32603, 00:07:42.992 "message": "Unable to find target foobar" 
00:07:42.992 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:42.992 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:42.992 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21967 00:07:43.556 [2024-07-15 12:47:01.461567] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21967: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:43.556 { 00:07:43.556 "nqn": "nqn.2016-06.io.spdk:cnode21967", 00:07:43.556 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:43.556 "method": "nvmf_create_subsystem", 00:07:43.556 "req_id": 1 00:07:43.556 } 00:07:43.556 Got JSON-RPC error response 00:07:43.556 response: 00:07:43.556 { 00:07:43.556 "code": -32602, 00:07:43.556 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:43.556 }' 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:43.556 { 00:07:43.556 "nqn": "nqn.2016-06.io.spdk:cnode21967", 00:07:43.556 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:43.556 "method": "nvmf_create_subsystem", 00:07:43.556 "req_id": 1 00:07:43.556 } 00:07:43.556 Got JSON-RPC error response 00:07:43.556 response: 00:07:43.556 { 00:07:43.556 "code": -32602, 00:07:43.556 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:43.556 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8487 00:07:43.556 [2024-07-15 12:47:01.734481] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8487: invalid model number 'SPDK_Controller' 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:43.556 { 00:07:43.556 "nqn": "nqn.2016-06.io.spdk:cnode8487", 00:07:43.556 "model_number": "SPDK_Controller\u001f", 00:07:43.556 "method": "nvmf_create_subsystem", 00:07:43.556 "req_id": 1 00:07:43.556 } 00:07:43.556 Got JSON-RPC error response 00:07:43.556 response: 00:07:43.556 { 00:07:43.556 "code": -32602, 00:07:43.556 "message": "Invalid MN SPDK_Controller\u001f" 00:07:43.556 }' 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:43.556 { 00:07:43.556 "nqn": "nqn.2016-06.io.spdk:cnode8487", 00:07:43.556 "model_number": "SPDK_Controller\u001f", 00:07:43.556 "method": "nvmf_create_subsystem", 00:07:43.556 "req_id": 1 00:07:43.556 } 00:07:43.556 Got JSON-RPC error response 00:07:43.556 response: 00:07:43.556 { 00:07:43.556 "code": -32602, 00:07:43.556 "message": "Invalid MN SPDK_Controller\u001f" 00:07:43.556 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' 
'84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:43.556 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.557 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 
12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.814 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 
12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ue%&E1w3J^%@,~[] 9EYd' 00:07:43.815 12:47:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Ue%&E1w3J^%@,~[] 9EYd' nqn.2016-06.io.spdk:cnode30286 00:07:44.074 [2024-07-15 12:47:02.055565] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30286: invalid serial number 'Ue%&E1w3J^%@,~[] 9EYd' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:44.074 { 00:07:44.074 "nqn": "nqn.2016-06.io.spdk:cnode30286", 00:07:44.074 "serial_number": "Ue%&E1w3J^%@,~[] 9EYd", 00:07:44.074 "method": "nvmf_create_subsystem", 00:07:44.074 "req_id": 1 00:07:44.074 } 00:07:44.074 Got JSON-RPC error response 00:07:44.074 response: 00:07:44.074 { 
00:07:44.074 "code": -32602, 00:07:44.074 "message": "Invalid SN Ue%&E1w3J^%@,~[] 9EYd" 00:07:44.074 }' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:44.074 { 00:07:44.074 "nqn": "nqn.2016-06.io.spdk:cnode30286", 00:07:44.074 "serial_number": "Ue%&E1w3J^%@,~[] 9EYd", 00:07:44.074 "method": "nvmf_create_subsystem", 00:07:44.074 "req_id": 1 00:07:44.074 } 00:07:44.074 Got JSON-RPC error response 00:07:44.074 response: 00:07:44.074 { 00:07:44.074 "code": -32602, 00:07:44.074 "message": "Invalid SN Ue%&E1w3J^%@,~[] 9EYd" 00:07:44.074 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.074 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 
00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 
00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 
00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.075 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\''"xYY(hcXVEq'\'' s<\"G16#5jYdab$Tc$A*WeEr8Hr' 00:07:44.076 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\''"xYY(hcXVEq'\'' s<\"G16#5jYdab$Tc$A*WeEr8Hr' nqn.2016-06.io.spdk:cnode26085 00:07:44.333 [2024-07-15 12:47:02.448880] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26085: invalid model number ''"xYY(hcXVEq' s<\"G16#5jYdab$Tc$A*WeEr8Hr' 00:07:44.333 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:44.333 { 00:07:44.333 "nqn": "nqn.2016-06.io.spdk:cnode26085", 00:07:44.333 "model_number": "'\''\"xYY(hcXVEq'\'' s<\\\"G16#5jYdab$Tc$A*WeEr8Hr", 00:07:44.333 "method": "nvmf_create_subsystem", 00:07:44.333 "req_id": 1 00:07:44.333 } 00:07:44.333 Got JSON-RPC error response 00:07:44.333 response: 00:07:44.333 { 00:07:44.333 "code": -32602, 00:07:44.333 "message": "Invalid MN '\''\"xYY(hcXVEq'\'' s<\\\"G16#5jYdab$Tc$A*WeEr8Hr" 00:07:44.333 }' 00:07:44.333 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:44.333 { 00:07:44.333 "nqn": "nqn.2016-06.io.spdk:cnode26085", 00:07:44.334 "model_number": "'\"xYY(hcXVEq' s<\\\"G16#5jYdab$Tc$A*WeEr8Hr", 00:07:44.334 "method": "nvmf_create_subsystem", 00:07:44.334 "req_id": 1 00:07:44.334 } 00:07:44.334 Got JSON-RPC error response 00:07:44.334 response: 00:07:44.334 { 00:07:44.334 "code": -32602, 00:07:44.334 "message": "Invalid MN '\"xYY(hcXVEq' s<\\\"G16#5jYdab$Tc$A*WeEr8Hr" 00:07:44.334 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:44.334 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:44.591 [2024-07-15 12:47:02.689785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:44.591 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:44.849 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:44.849 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:44.849 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:44.849 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:44.849 12:47:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:45.107 [2024-07-15 12:47:03.195481] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:45.107 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:45.107 { 00:07:45.107 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:45.107 "listen_address": { 00:07:45.107 "trtype": "tcp", 00:07:45.107 "traddr": "", 00:07:45.107 "trsvcid": "4421" 00:07:45.107 }, 00:07:45.107 "method": "nvmf_subsystem_remove_listener", 00:07:45.107 "req_id": 1 00:07:45.107 } 00:07:45.107 Got JSON-RPC error response 00:07:45.107 response: 00:07:45.107 { 00:07:45.107 "code": -32602, 00:07:45.107 "message": "Invalid parameters" 00:07:45.107 }' 00:07:45.107 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:45.107 { 00:07:45.107 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:45.107 "listen_address": { 00:07:45.107 "trtype": "tcp", 00:07:45.107 "traddr": "", 00:07:45.107 "trsvcid": "4421" 00:07:45.107 }, 00:07:45.107 "method": "nvmf_subsystem_remove_listener", 00:07:45.107 "req_id": 1 00:07:45.107 } 00:07:45.107 Got JSON-RPC error response 00:07:45.107 response: 00:07:45.107 { 00:07:45.107 "code": -32602, 00:07:45.107 "message": "Invalid parameters" 00:07:45.107 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:45.107 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode173 -i 0 00:07:45.365 [2024-07-15 12:47:03.444226] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode173: invalid cntlid range [0-65519] 00:07:45.365 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:45.365 { 00:07:45.365 "nqn": "nqn.2016-06.io.spdk:cnode173", 00:07:45.365 "min_cntlid": 0, 00:07:45.365 "method": "nvmf_create_subsystem", 00:07:45.365 "req_id": 1 00:07:45.365 } 00:07:45.365 Got JSON-RPC error response 00:07:45.365 response: 00:07:45.365 { 00:07:45.365 "code": -32602, 00:07:45.365 "message": "Invalid cntlid range [0-65519]" 00:07:45.365 }' 00:07:45.365 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:45.365 { 00:07:45.365 "nqn": "nqn.2016-06.io.spdk:cnode173", 00:07:45.365 "min_cntlid": 0, 00:07:45.365 "method": "nvmf_create_subsystem", 00:07:45.365 "req_id": 1 00:07:45.365 } 00:07:45.365 Got JSON-RPC error response 00:07:45.365 response: 00:07:45.365 { 00:07:45.365 "code": -32602, 00:07:45.365 "message": "Invalid cntlid range [0-65519]" 00:07:45.365 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 
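[Reference note, not part of the captured console output] The cntlid-range cases recorded above and below reduce to the rpc.py calls sketched here, assuming the nvmf target started by this job is still up and rpc.py talks to its default RPC socket. Valid controller IDs are 1-65519, so each call is expected to fail with JSON-RPC error -32602 "Invalid cntlid range [...]".
  # minimal sketch using only the commands and flags that appear in this log
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode173   -i 0        # min_cntlid below the valid range
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28934 -i 65520    # min_cntlid above the valid range
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15528 -I 0        # max_cntlid below the valid range
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23071 -I 65520    # max_cntlid above the valid range
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7211  -i 6 -I 5   # min_cntlid greater than max_cntlid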
00:07:45.365 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28934 -i 65520 00:07:45.622 [2024-07-15 12:47:03.689008] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28934: invalid cntlid range [65520-65519] 00:07:45.622 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:45.622 { 00:07:45.622 "nqn": "nqn.2016-06.io.spdk:cnode28934", 00:07:45.622 "min_cntlid": 65520, 00:07:45.622 "method": "nvmf_create_subsystem", 00:07:45.622 "req_id": 1 00:07:45.622 } 00:07:45.622 Got JSON-RPC error response 00:07:45.622 response: 00:07:45.622 { 00:07:45.622 "code": -32602, 00:07:45.622 "message": "Invalid cntlid range [65520-65519]" 00:07:45.622 }' 00:07:45.622 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:45.622 { 00:07:45.622 "nqn": "nqn.2016-06.io.spdk:cnode28934", 00:07:45.622 "min_cntlid": 65520, 00:07:45.622 "method": "nvmf_create_subsystem", 00:07:45.622 "req_id": 1 00:07:45.622 } 00:07:45.622 Got JSON-RPC error response 00:07:45.622 response: 00:07:45.622 { 00:07:45.622 "code": -32602, 00:07:45.622 "message": "Invalid cntlid range [65520-65519]" 00:07:45.622 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.622 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15528 -I 0 00:07:45.880 [2024-07-15 12:47:03.937856] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15528: invalid cntlid range [1-0] 00:07:45.880 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:45.880 { 00:07:45.880 "nqn": "nqn.2016-06.io.spdk:cnode15528", 00:07:45.880 "max_cntlid": 0, 00:07:45.880 "method": "nvmf_create_subsystem", 00:07:45.880 "req_id": 1 00:07:45.880 } 00:07:45.880 Got JSON-RPC error response 00:07:45.880 response: 00:07:45.880 { 00:07:45.880 "code": -32602, 00:07:45.880 "message": "Invalid cntlid range [1-0]" 00:07:45.880 }' 00:07:45.880 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:45.880 { 00:07:45.880 "nqn": "nqn.2016-06.io.spdk:cnode15528", 00:07:45.880 "max_cntlid": 0, 00:07:45.880 "method": "nvmf_create_subsystem", 00:07:45.880 "req_id": 1 00:07:45.880 } 00:07:45.880 Got JSON-RPC error response 00:07:45.880 response: 00:07:45.880 { 00:07:45.880 "code": -32602, 00:07:45.880 "message": "Invalid cntlid range [1-0]" 00:07:45.880 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:45.880 12:47:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23071 -I 65520 00:07:46.138 [2024-07-15 12:47:04.210787] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23071: invalid cntlid range [1-65520] 00:07:46.138 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:46.138 { 00:07:46.138 "nqn": "nqn.2016-06.io.spdk:cnode23071", 00:07:46.138 "max_cntlid": 65520, 00:07:46.138 "method": "nvmf_create_subsystem", 00:07:46.138 "req_id": 1 00:07:46.138 } 00:07:46.138 Got JSON-RPC error response 00:07:46.138 response: 00:07:46.138 { 00:07:46.138 "code": -32602, 00:07:46.138 "message": "Invalid cntlid range [1-65520]" 00:07:46.138 }' 00:07:46.138 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:07:46.138 { 00:07:46.138 "nqn": "nqn.2016-06.io.spdk:cnode23071", 00:07:46.138 "max_cntlid": 65520, 00:07:46.138 "method": "nvmf_create_subsystem", 00:07:46.138 "req_id": 1 00:07:46.138 } 00:07:46.138 Got JSON-RPC error response 00:07:46.138 response: 00:07:46.138 { 00:07:46.138 "code": -32602, 00:07:46.138 "message": "Invalid cntlid range [1-65520]" 00:07:46.138 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:46.138 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7211 -i 6 -I 5 00:07:46.396 [2024-07-15 12:47:04.451560] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7211: invalid cntlid range [6-5] 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:46.396 { 00:07:46.396 "nqn": "nqn.2016-06.io.spdk:cnode7211", 00:07:46.396 "min_cntlid": 6, 00:07:46.396 "max_cntlid": 5, 00:07:46.396 "method": "nvmf_create_subsystem", 00:07:46.396 "req_id": 1 00:07:46.396 } 00:07:46.396 Got JSON-RPC error response 00:07:46.396 response: 00:07:46.396 { 00:07:46.396 "code": -32602, 00:07:46.396 "message": "Invalid cntlid range [6-5]" 00:07:46.396 }' 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:46.396 { 00:07:46.396 "nqn": "nqn.2016-06.io.spdk:cnode7211", 00:07:46.396 "min_cntlid": 6, 00:07:46.396 "max_cntlid": 5, 00:07:46.396 "method": "nvmf_create_subsystem", 00:07:46.396 "req_id": 1 00:07:46.396 } 00:07:46.396 Got JSON-RPC error response 00:07:46.396 response: 00:07:46.396 { 00:07:46.396 "code": -32602, 00:07:46.396 "message": "Invalid cntlid range [6-5]" 00:07:46.396 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:46.396 { 00:07:46.396 "name": "foobar", 00:07:46.396 "method": "nvmf_delete_target", 00:07:46.396 "req_id": 1 00:07:46.396 } 00:07:46.396 Got JSON-RPC error response 00:07:46.396 response: 00:07:46.396 { 00:07:46.396 "code": -32602, 00:07:46.396 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:46.396 }' 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:46.396 { 00:07:46.396 "name": "foobar", 00:07:46.396 "method": "nvmf_delete_target", 00:07:46.396 "req_id": 1 00:07:46.396 } 00:07:46.396 Got JSON-RPC error response 00:07:46.396 response: 00:07:46.396 { 00:07:46.396 "code": -32602, 00:07:46.396 "message": "The specified target doesn't exist, cannot delete it." 
00:07:46.396 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:46.396 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:46.396 rmmod nvme_tcp 00:07:46.656 rmmod nvme_fabrics 00:07:46.656 rmmod nvme_keyring 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3308922 ']' 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3308922 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3308922 ']' 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3308922 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3308922 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3308922' 00:07:46.656 killing process with pid 3308922 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3308922 00:07:46.656 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3308922 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.915 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.916 12:47:04 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.821 12:47:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.821 00:07:48.821 real 0m9.540s 00:07:48.821 user 0m23.104s 00:07:48.822 sys 0m2.630s 00:07:48.822 12:47:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.822 12:47:06 
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:48.822 ************************************ 00:07:48.822 END TEST nvmf_invalid 00:07:48.822 ************************************ 00:07:48.822 12:47:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:48.822 12:47:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:48.822 12:47:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.822 12:47:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.822 12:47:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.079 ************************************ 00:07:49.079 START TEST nvmf_abort 00:07:49.079 ************************************ 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:49.079 * Looking for test storage... 00:07:49.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.079 12:47:07 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.079 12:47:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.080 12:47:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.610 
12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:51.610 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:51.610 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:51.610 Found net devices under 0000:84:00.0: cvl_0_0 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:51.610 Found net devices under 0000:84:00.1: cvl_0_1 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:07:51.610 00:07:51.610 --- 10.0.0.2 ping statistics --- 00:07:51.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.610 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:51.610 00:07:51.610 --- 10.0.0.1 ping statistics --- 00:07:51.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.610 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.610 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3311587 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3311587 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3311587 ']' 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 [2024-07-15 12:47:09.451904] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
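Stripped of the xtrace prefixes, the network preparation above is the usual nvmf_tcp_init plumbing: the E810 port cvl_0_0 becomes the target interface inside a fresh network namespace at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of the same steps (every command below is taken verbatim from the trace; only the comments are added):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
  ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespaced target -> root namespace
  modprobe nvme-tcp

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the process whose DPDK/EAL startup messages follow.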
00:07:51.611 [2024-07-15 12:47:09.451988] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.611 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.611 [2024-07-15 12:47:09.514979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.611 [2024-07-15 12:47:09.615552] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.611 [2024-07-15 12:47:09.615617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.611 [2024-07-15 12:47:09.615645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.611 [2024-07-15 12:47:09.615656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.611 [2024-07-15 12:47:09.615666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.611 [2024-07-15 12:47:09.615792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.611 [2024-07-15 12:47:09.615826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.611 [2024-07-15 12:47:09.615829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 [2024-07-15 12:47:09.759937] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 Malloc0 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.611 Delay0 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.611 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.869 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.869 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:51.869 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.869 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.869 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.870 [2024-07-15 12:47:09.830435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.870 12:47:09 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:51.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.870 [2024-07-15 12:47:09.895405] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:54.406 Initializing NVMe Controllers 00:07:54.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:54.406 controller IO queue size 128 less than required 00:07:54.406 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:54.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:54.406 Initialization complete. Launching workers. 
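By this point target/abort.sh has assembled a minimal target inside the namespace: a TCP transport (nvmf_create_transport -t tcp -o -u 8192 -a 256), a Malloc0 ramdisk (bdev_malloc_create 64 4096) wrapped in a Delay0 delay bdev (bdev_delay_create with 1000000 passed for each latency parameter), attached as a namespace of nqn.2016-06.io.spdk:cnode0, which listens on 10.0.0.2:4420 alongside the discovery subsystem. The abort example is then pointed at it from the root namespace, exactly as in the trace:

  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The Delay0 layer is what makes the abort test meaningful: reads sit inside the delay bdev long enough for the example to issue abort commands against them, which is presumably why the summary below shows tens of thousands of successful aborts and only a handful that lost the race ('unsuccess').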
00:07:54.406 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33495 00:07:54.406 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33556, failed to submit 62 00:07:54.406 success 33499, unsuccess 57, failed 0 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:54.406 rmmod nvme_tcp 00:07:54.406 rmmod nvme_fabrics 00:07:54.406 rmmod nvme_keyring 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3311587 ']' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3311587 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3311587 ']' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3311587 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3311587 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3311587' 00:07:54.406 killing process with pid 3311587 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3311587 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3311587 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.406 12:47:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.332 12:47:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:56.332 00:07:56.332 real 0m7.449s 00:07:56.332 user 0m10.617s 00:07:56.332 sys 0m2.662s 00:07:56.332 12:47:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.332 12:47:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.332 ************************************ 00:07:56.332 END TEST nvmf_abort 00:07:56.332 ************************************ 00:07:56.332 12:47:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:56.332 12:47:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.332 12:47:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:56.332 12:47:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.332 12:47:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:56.332 ************************************ 00:07:56.332 START TEST nvmf_ns_hotplug_stress 00:07:56.332 ************************************ 00:07:56.332 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:56.591 * Looking for test storage... 00:07:56.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.591 12:47:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:56.591 12:47:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:56.591 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:56.592 12:47:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:59.124 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:59.125 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:59.125 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:59.125 12:47:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:59.125 Found net devices under 0000:84:00.0: cvl_0_0 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:59.125 Found net devices under 0000:84:00.1: cvl_0_1 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:59.125 12:47:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:59.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:07:59.125 00:07:59.125 --- 10.0.0.2 ping statistics --- 00:07:59.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.125 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:59.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:07:59.125 00:07:59.125 --- 10.0.0.1 ping statistics --- 00:07:59.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.125 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3313946 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3313946 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3313946 ']' 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.125 12:47:16 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.125 [2024-07-15 12:47:16.939514] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:07:59.125 [2024-07-15 12:47:16.939591] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.125 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.125 [2024-07-15 12:47:17.002900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:59.125 [2024-07-15 12:47:17.103332] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.125 [2024-07-15 12:47:17.103392] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:59.125 [2024-07-15 12:47:17.103419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.125 [2024-07-15 12:47:17.103430] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.126 [2024-07-15 12:47:17.103440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.126 [2024-07-15 12:47:17.103529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.126 [2024-07-15 12:47:17.103572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.126 [2024-07-15 12:47:17.103575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:59.126 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.384 [2024-07-15 12:47:17.521919] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.384 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:59.642 12:47:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.899 [2024-07-15 12:47:18.072763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.899 12:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:00.156 12:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:08:00.414 Malloc0 00:08:00.414 12:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:00.672 Delay0 00:08:00.672 12:47:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.930 12:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:01.188 NULL1 00:08:01.188 12:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:01.446 12:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3314250 00:08:01.446 12:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:01.446 12:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:01.446 12:47:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.446 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.826 Read completed with error (sct=0, sc=11) 00:08:02.826 12:47:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.826 12:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:02.826 12:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:03.393 true 00:08:03.393 12:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:03.393 12:47:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.960 12:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.218 12:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:04.218 12:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:04.476 true 00:08:04.476 12:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:04.476 12:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.734 12:47:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.993 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:04.993 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:05.250 true 00:08:05.250 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:05.250 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.507 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.764 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:05.764 12:47:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:06.022 true 00:08:06.022 12:47:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:06.022 12:47:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:06.959 12:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.959 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.217 12:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:07.217 12:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:07.476 true 00:08:07.476 12:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:07.476 12:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.733 12:47:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.991 12:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:07.991 12:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:08.249 true 00:08:08.249 12:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:08.249 12:47:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:09.182 12:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.439 12:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:09.439 12:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:09.696 true 00:08:09.696 12:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:09.696 12:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.954 12:47:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.211 12:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:10.211 12:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:10.468 true 00:08:10.468 12:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:10.468 12:47:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.402 12:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:11.659 12:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:11.659 12:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:11.917 true 00:08:11.917 12:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:11.917 12:47:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.175 12:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.433 12:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:12.433 12:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:12.690 true 00:08:12.690 12:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:12.690 12:47:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.621 12:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.621 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.621 12:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:13.621 12:47:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:13.878 true 00:08:13.878 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:13.878 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.135 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.392 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:14.392 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:14.649 true 00:08:14.649 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:14.649 12:47:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.579 12:47:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.836 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:15.836 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:16.094 true 00:08:16.094 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:16.094 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.350 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.607 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:16.607 12:47:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:16.865 true 00:08:16.865 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:16.865 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.122 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.381 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:17.381 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:17.642 true 00:08:17.642 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:17.642 12:47:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.622 12:47:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.622 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.880 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:18.880 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:19.139 true 00:08:19.139 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:19.139 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.397 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.655 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:19.655 12:47:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:19.913 true 00:08:19.913 12:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:19.913 12:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.850 12:47:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.108 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:21.108 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:21.367 true 00:08:21.367 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:21.367 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.625 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.884 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:21.884 12:47:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:22.142 true 00:08:22.142 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:22.142 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.400 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.658 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:22.658 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:22.916 true 00:08:22.916 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:22.916 12:47:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.851 12:47:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.851 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.110 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
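For readability, the cycle that the `-x` trace above keeps repeating (ns_hotplug_stress.sh lines 44-50 as printed in the trace) can be summarized with a minimal sketch. This is a reconstruction from the trace, not the script itself; the rpc.py path, the nqn and the perf_pid variable are placeholders taken from this particular run.

```bash
#!/usr/bin/env bash
# Sketch of the single-namespace hotplug loop seen in the trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # placeholder path from this run
nqn=nqn.2016-06.io.spdk:cnode1
perf_pid=$1        # PID of the background I/O workload (3314250 in this run)
null_size=1014

while kill -0 "$perf_pid"; do
    # hot-remove namespace 1, then re-attach the Delay0 bdev to the subsystem
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0
    # grow the NULL1 bdev each pass (1014, 1015, ... as in the trace)
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done
```

The loop keeps churning until `kill -0` fails, i.e. until the background workload exits, which is the "No such process" event visible further down in the log.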
00:08:24.110 12:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:24.110 12:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:24.368 true 00:08:24.368 12:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:24.368 12:47:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.299 12:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.556 12:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:25.556 12:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:25.813 true 00:08:25.813 12:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:25.813 12:47:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.071 12:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.329 12:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:26.329 12:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:26.586 true 00:08:26.586 12:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:26.586 12:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.844 12:47:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.101 12:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:27.101 12:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:27.359 true 00:08:27.359 12:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:27.359 12:47:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.293 12:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.549 12:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:28.549 12:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:28.806 true 00:08:28.806 12:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:28.806 12:47:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.063 12:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.321 12:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:29.321 12:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:29.578 true 00:08:29.578 12:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:29.578 12:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.835 12:47:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.093 12:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:30.093 12:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:30.350 true 00:08:30.350 12:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:30.350 12:47:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.290 12:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.290 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:31.547 12:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:31.547 12:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:31.805 true 00:08:31.805 12:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:31.805 12:47:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.805 Initializing NVMe Controllers 00:08:31.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.805 Controller IO queue size 128, less than required. 00:08:31.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.805 Controller IO queue size 128, less than required. 
00:08:31.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:31.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:31.805 Initialization complete. Launching workers. 00:08:31.805 ======================================================== 00:08:31.805 Latency(us) 00:08:31.805 Device Information : IOPS MiB/s Average min max 00:08:31.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1045.44 0.51 60560.80 2958.64 1026876.53 00:08:31.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10737.26 5.24 11922.18 2910.35 450012.35 00:08:31.805 ======================================================== 00:08:31.805 Total : 11782.69 5.75 16237.71 2910.35 1026876.53 00:08:31.805 00:08:32.062 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.318 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:08:32.318 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:08:32.628 true 00:08:32.628 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3314250 00:08:32.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3314250) - No such process 00:08:32.628 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3314250 00:08:32.628 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.629 12:47:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:32.887 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:32.887 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:32.887 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:32.887 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:32.887 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:33.145 null0 00:08:33.145 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.145 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.145 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:33.402 null1 00:08:33.402 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.402 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.402 12:47:51 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:33.659 null2 00:08:33.659 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.659 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.659 12:47:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:33.916 null3 00:08:33.916 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:33.916 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:33.916 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:34.172 null4 00:08:34.172 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.172 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.172 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:34.429 null5 00:08:34.429 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.429 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.429 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:34.687 null6 00:08:34.687 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.687 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.687 12:47:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:34.945 null7 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
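At this point the trace switches to the multi-worker phase: nthreads=8 and one null bdev per worker (null0 through null7). A sketch of that setup loop, consistent with the arguments printed in the trace (the rpc.py path is again a placeholder for this run):

```bash
# Sketch of the bdev setup traced above (ns_hotplug_stress.sh lines 58-60):
# one null bdev per worker, created with the same arguments shown in the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8

for ((i = 0; i < nthreads; i++)); do
    $rpc bdev_null_create "null$i" 100 4096   # null0 .. null7
done
```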
00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
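The interleaved @62-@64 and @14-@18 entries above come from eight background workers, each repeatedly attaching one null bdev as a fixed namespace ID and detaching it again, ten times, before the script waits on all of them. The following sketch mirrors that pattern as reconstructed from the trace; variable and bdev names follow what the trace prints, and the rpc.py path is a placeholder.

```bash
# Sketch of the concurrent add/remove workers seen in the trace above:
# add_remove (lines 14-18 of the trace) cycles one namespace ID against one
# bdev; the launcher (lines 62-66) starts one worker per namespace and waits.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8
pids=()

add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # worker i drives nsid i+1 against null{i}
    pids+=($!)
done
wait "${pids[@]}"
```

The remainder of this section of the log is those eight workers running concurrently, which is why the add_ns/remove_ns calls for namespaces 1-8 appear in varying order.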
00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:34.945 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3318303 3318304 3318306 3318308 3318310 3318312 3318314 3318316 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:34.946 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.205 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.464 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:35.722 12:47:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 
12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:35.980 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.236 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.493 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.750 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:36.750 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:36.750 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.750 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.750 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.005 12:47:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 
12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.263 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.521 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.852 12:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.852 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.852 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.852 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.109 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:08:38.109 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.110 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.110 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.110 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 
12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.367 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.625 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.882 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.883 12:47:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.140 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.397 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.654 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.911 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.912 12:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:40.168 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.426 rmmod nvme_tcp 00:08:40.426 rmmod nvme_fabrics 00:08:40.426 rmmod nvme_keyring 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3313946 ']' 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3313946 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3313946 ']' 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3313946 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:40.426 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:40.426 12:47:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3313946 00:08:40.683 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:40.683 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:40.683 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3313946' 00:08:40.683 killing process with pid 3313946 00:08:40.683 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3313946 00:08:40.683 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3313946 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.940 12:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.838 12:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.838 00:08:42.838 real 0m46.447s 00:08:42.838 user 3m31.381s 00:08:42.838 sys 0m16.658s 00:08:42.838 12:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.838 12:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 ************************************ 00:08:42.838 END TEST nvmf_ns_hotplug_stress 00:08:42.838 ************************************ 00:08:42.838 12:48:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:42.838 12:48:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:42.838 12:48:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.838 12:48:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.838 12:48:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 ************************************ 00:08:42.838 START TEST nvmf_connect_stress 00:08:42.838 ************************************ 00:08:42.838 12:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:43.096 * Looking for test storage... 
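The nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns churn in the ns_hotplug_stress run above boils down to a loop of roughly this shape (a minimal sketch reconstructed from the xtrace output; the real target/ns_hotplug_stress.sh issues the RPCs asynchronously, which is why the per-iteration add/remove order varies in the trace, and the null0..null7 bdevs are assumed to have been created earlier in the test):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  i=0
  while (( i < 10 )); do
      # attach null bdevs null0..null7 as namespaces 1..8
      for n in $(seq 1 8); do
          $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # detach them again to exercise the hot-remove path
      for n in $(seq 1 8); do
          $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done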
00:08:43.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.096 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.097 12:48:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:45.023 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:45.023 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:45.023 Found net devices under 0000:84:00.0: cvl_0_0 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.023 12:48:03 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:45.023 Found net devices under 0000:84:00.1: cvl_0_1 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.023 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:45.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:08:45.280 00:08:45.280 --- 10.0.0.2 ping statistics --- 00:08:45.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.280 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:08:45.280 00:08:45.280 --- 10.0.0.1 ping statistics --- 00:08:45.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.280 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.280 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3321198 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3321198 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3321198 ']' 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.281 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.281 [2024-07-15 12:48:03.407447] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
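For reference, the nvmf_tcp_init sequence traced above pins one port of the e810 NIC inside a network namespace so that the target (10.0.0.2) and the initiator (10.0.0.1) talk over real hardware; condensed, it amounts to the following (commands taken from the trace, with the cvl_0_0/cvl_0_1 device names as assigned on this host):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # host -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host sanity check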
00:08:45.281 [2024-07-15 12:48:03.407520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.281 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.281 [2024-07-15 12:48:03.473537] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.537 [2024-07-15 12:48:03.585288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.537 [2024-07-15 12:48:03.585362] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.537 [2024-07-15 12:48:03.585376] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.537 [2024-07-15 12:48:03.585401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.537 [2024-07-15 12:48:03.585412] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.537 [2024-07-15 12:48:03.585496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.537 [2024-07-15 12:48:03.585541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.537 [2024-07-15 12:48:03.585543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.537 [2024-07-15 12:48:03.736358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.537 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 [2024-07-15 12:48:03.769910] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.793 NULL1 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3321338 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.793 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.794 12:48:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.050 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.050 12:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:46.050 12:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.050 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.050 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.306 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.306 12:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:46.306 12:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.306 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.306 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:46.868 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.868 12:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 
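Stripped of the xtrace noise, the connect_stress setup above configures the target that was started inside the namespace, points the stress tool at it, and then keeps exercising the target over RPC while checking that the tool is still alive. A rough sketch, assuming the usual autotest conventions (rpc_cmd is the wrapper around scripts/rpc.py, the tool is backgrounded and tracked via $!, and the batched RPCs in rpc.txt built by the seq 1 20 loop are fed through a redirection that the trace does not show):

  nqn=nqn.2016-06.io.spdk:cnode1

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512

  # start the stress tool against the listener and remember its pid
  test/nvme/connect_stress/connect_stress -c 0x1 \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" -t 10 &
  PERF_PID=$!

  # keep poking the target for as long as the stress tool is alive
  while kill -0 "$PERF_PID"; do
      rpc_cmd < test/nvmf/target/rpc.txt
  done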
00:08:46.868 12:48:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:46.868 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.868 12:48:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.125 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.125 12:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:47.125 12:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.125 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.125 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.382 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.382 12:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:47.382 12:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.382 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.382 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.639 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.639 12:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:47.639 12:48:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.639 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.639 12:48:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:47.896 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.896 12:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:47.896 12:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:47.896 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.896 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.462 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.462 12:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:48.462 12:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.462 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.462 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.720 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.720 12:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:48.720 12:48:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.720 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.720 12:48:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:48.978 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.978 12:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:48.978 12:48:07 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:08:48.978 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.978 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.236 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.237 12:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:49.237 12:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.237 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.237 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:49.495 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.495 12:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:49.495 12:48:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:49.495 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.495 12:48:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.064 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.064 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:50.064 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.064 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.064 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.323 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.323 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:50.323 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.323 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.323 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.581 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.581 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:50.581 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.581 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.581 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:50.839 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.839 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:50.839 12:48:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:50.839 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.839 12:48:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.098 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.098 12:48:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:51.098 12:48:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.098 
12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.098 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.664 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.664 12:48:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:51.664 12:48:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.664 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.664 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:51.924 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:51.924 12:48:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:51.924 12:48:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:51.924 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.924 12:48:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.184 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.184 12:48:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:52.184 12:48:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.184 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.184 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.444 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.444 12:48:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:52.444 12:48:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.444 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.444 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:52.703 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.703 12:48:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:52.703 12:48:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:52.703 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.703 12:48:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.272 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.272 12:48:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:53.272 12:48:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.272 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.272 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.531 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.531 12:48:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:53.531 12:48:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.531 12:48:11 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.531 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:53.790 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.790 12:48:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:53.790 12:48:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:53.790 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.791 12:48:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.049 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.049 12:48:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:54.049 12:48:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.049 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.049 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.307 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.307 12:48:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:54.307 12:48:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.307 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.307 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:54.874 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.874 12:48:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:54.874 12:48:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:54.874 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.874 12:48:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.132 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.132 12:48:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:55.132 12:48:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.132 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.132 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.390 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.390 12:48:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:55.390 12:48:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.390 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.390 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.649 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.649 12:48:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:55.649 12:48:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:55.649 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:55.649 12:48:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:55.908 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:55.908 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.908 12:48:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3321338 00:08:55.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3321338) - No such process 00:08:55.908 12:48:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3321338 00:08:55.908 12:48:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.168 rmmod nvme_tcp 00:08:56.168 rmmod nvme_fabrics 00:08:56.168 rmmod nvme_keyring 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3321198 ']' 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3321198 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3321198 ']' 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3321198 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3321198 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3321198' 00:08:56.168 killing process with pid 3321198 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3321198 00:08:56.168 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3321198 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.426 12:48:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.332 12:48:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.332 00:08:58.332 real 0m15.493s 00:08:58.332 user 0m38.102s 00:08:58.332 sys 0m6.409s 00:08:58.332 12:48:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.332 12:48:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:58.332 ************************************ 00:08:58.332 END TEST nvmf_connect_stress 00:08:58.332 ************************************ 00:08:58.591 12:48:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.591 12:48:16 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:58.591 12:48:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.591 12:48:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.591 12:48:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.591 ************************************ 00:08:58.591 START TEST nvmf_fused_ordering 00:08:58.591 ************************************ 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:58.591 * Looking for test storage... 
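The loop traced above is the core of connect_stress.sh: while the stress process (PID 3321338 in this run) is still alive, the script keeps probing it with kill -0 (line 34) and issuing an rpc_cmd against the target (line 35), then reaps it with wait (line 38), removes the rpc.txt scratch file (line 39), clears the trap (line 41) and tears everything down with nvmftestfini (line 43), which unloads nvme-tcp/nvme-fabrics and kills the nvmf_tgt reactor. A minimal bash sketch of that pattern follows; it is not the SPDK script itself, and the RPC method is an assumption because xtrace_disable hides the rpc_cmd arguments in the trace:

# Sketch only; the paths, variable names and the RPC method are illustrative assumptions.
stress_pid=3321338                          # stress tool started earlier by the test
while kill -0 "$stress_pid" 2>/dev/null; do
    rpc_cmd nvmf_get_subsystems >/dev/null  # any cheap RPC keeps the target busy (method assumed)
done
wait "$stress_pid" || true                  # collect the exit status once the process is gone
rm -f "$testdir/rpc.txt"                    # drop the per-run RPC scratch file
trap - SIGINT SIGTERM EXIT
nvmftestfini                                # rmmod nvme-tcp/nvme-fabrics, kill nvmf_tgt, flush test IPs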
00:08:58.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.591 12:48:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:01.152 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:01.153 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:01.153 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:01.153 Found net devices under 0000:84:00.0: cvl_0_0 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:01.153 12:48:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:01.153 Found net devices under 0000:84:00.1: cvl_0_1 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:01.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:01.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:09:01.153 00:09:01.153 --- 10.0.0.2 ping statistics --- 00:09:01.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.153 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:01.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:09:01.153 00:09:01.153 --- 10.0.0.1 ping statistics --- 00:09:01.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.153 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3325014 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3325014 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3325014 ']' 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.153 12:48:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.153 [2024-07-15 12:48:18.988955] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
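Everything from the namespace creation through the two pings above is the standard phy-mode TCP setup from nvmf/common.sh: one e810 port is moved into a target namespace and addressed as 10.0.0.2, the peer port stays in the default namespace as 10.0.0.1, and TCP port 4420 is opened in iptables before the target is started inside the namespace and configured over RPC (the rpc_cmd calls traced a little further down). Collected into one plain bash sketch for readability, with interface names, addresses and RPC arguments copied from this trace, and with paths, backgrounding and error handling simplified:

# Condensed from the nvmf_tcp_init/nvmfappstart/fused_ordering.sh trace; not a
# standalone script. Relative paths and the omitted waitforlisten are simplifications.
ip netns add cvl_0_0_ns_spdk                          # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one e810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks (the log reports the namespace as 1GB)
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'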
00:09:01.153 [2024-07-15 12:48:18.989048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.153 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.153 [2024-07-15 12:48:19.052786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.153 [2024-07-15 12:48:19.154055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.153 [2024-07-15 12:48:19.154107] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.153 [2024-07-15 12:48:19.154135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.153 [2024-07-15 12:48:19.154146] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.154 [2024-07-15 12:48:19.154155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:01.154 [2024-07-15 12:48:19.154187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 [2024-07-15 12:48:19.293461] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 [2024-07-15 12:48:19.309626] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.154 12:48:19 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 NULL1 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:01.154 12:48:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:01.154 [2024-07-15 12:48:19.353471] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:09:01.154 [2024-07-15 12:48:19.353508] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325139 ] 00:09:01.413 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.673 Attached to nqn.2016-06.io.spdk:cnode1 00:09:01.673 Namespace ID: 1 size: 1GB 00:09:01.673 fused_ordering(0) 00:09:01.673 fused_ordering(1) 00:09:01.673 fused_ordering(2) 00:09:01.673 fused_ordering(3) 00:09:01.673 fused_ordering(4) 00:09:01.673 fused_ordering(5) 00:09:01.673 fused_ordering(6) 00:09:01.673 fused_ordering(7) 00:09:01.673 fused_ordering(8) 00:09:01.673 fused_ordering(9) 00:09:01.673 fused_ordering(10) 00:09:01.673 fused_ordering(11) 00:09:01.673 fused_ordering(12) 00:09:01.673 fused_ordering(13) 00:09:01.673 fused_ordering(14) 00:09:01.673 fused_ordering(15) 00:09:01.673 fused_ordering(16) 00:09:01.673 fused_ordering(17) 00:09:01.673 fused_ordering(18) 00:09:01.673 fused_ordering(19) 00:09:01.673 fused_ordering(20) 00:09:01.673 fused_ordering(21) 00:09:01.673 fused_ordering(22) 00:09:01.673 fused_ordering(23) 00:09:01.673 fused_ordering(24) 00:09:01.673 fused_ordering(25) 00:09:01.673 fused_ordering(26) 00:09:01.673 fused_ordering(27) 00:09:01.673 fused_ordering(28) 00:09:01.673 fused_ordering(29) 00:09:01.673 fused_ordering(30) 00:09:01.673 fused_ordering(31) 00:09:01.673 fused_ordering(32) 00:09:01.673 fused_ordering(33) 00:09:01.673 fused_ordering(34) 00:09:01.673 fused_ordering(35) 00:09:01.673 fused_ordering(36) 00:09:01.673 fused_ordering(37) 00:09:01.673 fused_ordering(38) 00:09:01.673 fused_ordering(39) 00:09:01.673 fused_ordering(40) 00:09:01.673 fused_ordering(41) 00:09:01.673 fused_ordering(42) 00:09:01.673 fused_ordering(43) 00:09:01.673 
fused_ordering(44) 00:09:01.673 fused_ordering(45) 00:09:01.673 fused_ordering(46) 00:09:01.673 fused_ordering(47) 00:09:01.673 fused_ordering(48) 00:09:01.673 fused_ordering(49) 00:09:01.673 fused_ordering(50) 00:09:01.673 fused_ordering(51) 00:09:01.673 fused_ordering(52) 00:09:01.674 fused_ordering(53) 00:09:01.674 fused_ordering(54) 00:09:01.674 fused_ordering(55) 00:09:01.674 fused_ordering(56) 00:09:01.674 fused_ordering(57) 00:09:01.674 fused_ordering(58) 00:09:01.674 fused_ordering(59) 00:09:01.674 fused_ordering(60) 00:09:01.674 fused_ordering(61) 00:09:01.674 fused_ordering(62) 00:09:01.674 fused_ordering(63) 00:09:01.674 fused_ordering(64) 00:09:01.674 fused_ordering(65) 00:09:01.674 fused_ordering(66) 00:09:01.674 fused_ordering(67) 00:09:01.674 fused_ordering(68) 00:09:01.674 fused_ordering(69) 00:09:01.674 fused_ordering(70) 00:09:01.674 fused_ordering(71) 00:09:01.674 fused_ordering(72) 00:09:01.674 fused_ordering(73) 00:09:01.674 fused_ordering(74) 00:09:01.674 fused_ordering(75) 00:09:01.674 fused_ordering(76) 00:09:01.674 fused_ordering(77) 00:09:01.674 fused_ordering(78) 00:09:01.674 fused_ordering(79) 00:09:01.674 fused_ordering(80) 00:09:01.674 fused_ordering(81) 00:09:01.674 fused_ordering(82) 00:09:01.674 fused_ordering(83) 00:09:01.674 fused_ordering(84) 00:09:01.674 fused_ordering(85) 00:09:01.674 fused_ordering(86) 00:09:01.674 fused_ordering(87) 00:09:01.674 fused_ordering(88) 00:09:01.674 fused_ordering(89) 00:09:01.674 fused_ordering(90) 00:09:01.674 fused_ordering(91) 00:09:01.674 fused_ordering(92) 00:09:01.674 fused_ordering(93) 00:09:01.674 fused_ordering(94) 00:09:01.674 fused_ordering(95) 00:09:01.674 fused_ordering(96) 00:09:01.674 fused_ordering(97) 00:09:01.674 fused_ordering(98) 00:09:01.674 fused_ordering(99) 00:09:01.674 fused_ordering(100) 00:09:01.674 fused_ordering(101) 00:09:01.674 fused_ordering(102) 00:09:01.674 fused_ordering(103) 00:09:01.674 fused_ordering(104) 00:09:01.674 fused_ordering(105) 00:09:01.674 fused_ordering(106) 00:09:01.674 fused_ordering(107) 00:09:01.674 fused_ordering(108) 00:09:01.674 fused_ordering(109) 00:09:01.674 fused_ordering(110) 00:09:01.674 fused_ordering(111) 00:09:01.674 fused_ordering(112) 00:09:01.674 fused_ordering(113) 00:09:01.674 fused_ordering(114) 00:09:01.674 fused_ordering(115) 00:09:01.674 fused_ordering(116) 00:09:01.674 fused_ordering(117) 00:09:01.674 fused_ordering(118) 00:09:01.674 fused_ordering(119) 00:09:01.674 fused_ordering(120) 00:09:01.674 fused_ordering(121) 00:09:01.674 fused_ordering(122) 00:09:01.674 fused_ordering(123) 00:09:01.674 fused_ordering(124) 00:09:01.674 fused_ordering(125) 00:09:01.674 fused_ordering(126) 00:09:01.674 fused_ordering(127) 00:09:01.674 fused_ordering(128) 00:09:01.674 fused_ordering(129) 00:09:01.674 fused_ordering(130) 00:09:01.674 fused_ordering(131) 00:09:01.674 fused_ordering(132) 00:09:01.674 fused_ordering(133) 00:09:01.674 fused_ordering(134) 00:09:01.674 fused_ordering(135) 00:09:01.674 fused_ordering(136) 00:09:01.674 fused_ordering(137) 00:09:01.674 fused_ordering(138) 00:09:01.674 fused_ordering(139) 00:09:01.674 fused_ordering(140) 00:09:01.674 fused_ordering(141) 00:09:01.674 fused_ordering(142) 00:09:01.674 fused_ordering(143) 00:09:01.674 fused_ordering(144) 00:09:01.674 fused_ordering(145) 00:09:01.674 fused_ordering(146) 00:09:01.674 fused_ordering(147) 00:09:01.674 fused_ordering(148) 00:09:01.674 fused_ordering(149) 00:09:01.674 fused_ordering(150) 00:09:01.674 fused_ordering(151) 00:09:01.674 fused_ordering(152) 00:09:01.674 
fused_ordering(153) 00:09:01.674 fused_ordering(154) 00:09:01.674 fused_ordering(155) 00:09:01.674 fused_ordering(156) 00:09:01.674 fused_ordering(157) 00:09:01.674 fused_ordering(158) 00:09:01.674 fused_ordering(159) 00:09:01.674 fused_ordering(160) 00:09:01.674 fused_ordering(161) 00:09:01.674 fused_ordering(162) 00:09:01.674 fused_ordering(163) 00:09:01.674 fused_ordering(164) 00:09:01.674 fused_ordering(165) 00:09:01.674 fused_ordering(166) 00:09:01.674 fused_ordering(167) 00:09:01.674 fused_ordering(168) 00:09:01.674 fused_ordering(169) 00:09:01.674 fused_ordering(170) 00:09:01.674 fused_ordering(171) 00:09:01.674 fused_ordering(172) 00:09:01.674 fused_ordering(173) 00:09:01.674 fused_ordering(174) 00:09:01.674 fused_ordering(175) 00:09:01.674 fused_ordering(176) 00:09:01.674 fused_ordering(177) 00:09:01.674 fused_ordering(178) 00:09:01.674 fused_ordering(179) 00:09:01.674 fused_ordering(180) 00:09:01.674 fused_ordering(181) 00:09:01.674 fused_ordering(182) 00:09:01.674 fused_ordering(183) 00:09:01.674 fused_ordering(184) 00:09:01.674 fused_ordering(185) 00:09:01.674 fused_ordering(186) 00:09:01.674 fused_ordering(187) 00:09:01.674 fused_ordering(188) 00:09:01.674 fused_ordering(189) 00:09:01.674 fused_ordering(190) 00:09:01.674 fused_ordering(191) 00:09:01.674 fused_ordering(192) 00:09:01.674 fused_ordering(193) 00:09:01.674 fused_ordering(194) 00:09:01.674 fused_ordering(195) 00:09:01.674 fused_ordering(196) 00:09:01.674 fused_ordering(197) 00:09:01.674 fused_ordering(198) 00:09:01.674 fused_ordering(199) 00:09:01.674 fused_ordering(200) 00:09:01.674 fused_ordering(201) 00:09:01.674 fused_ordering(202) 00:09:01.674 fused_ordering(203) 00:09:01.674 fused_ordering(204) 00:09:01.674 fused_ordering(205) 00:09:01.932 fused_ordering(206) 00:09:01.932 fused_ordering(207) 00:09:01.932 fused_ordering(208) 00:09:01.932 fused_ordering(209) 00:09:01.932 fused_ordering(210) 00:09:01.932 fused_ordering(211) 00:09:01.932 fused_ordering(212) 00:09:01.932 fused_ordering(213) 00:09:01.932 fused_ordering(214) 00:09:01.932 fused_ordering(215) 00:09:01.932 fused_ordering(216) 00:09:01.932 fused_ordering(217) 00:09:01.932 fused_ordering(218) 00:09:01.932 fused_ordering(219) 00:09:01.932 fused_ordering(220) 00:09:01.932 fused_ordering(221) 00:09:01.932 fused_ordering(222) 00:09:01.932 fused_ordering(223) 00:09:01.932 fused_ordering(224) 00:09:01.932 fused_ordering(225) 00:09:01.932 fused_ordering(226) 00:09:01.932 fused_ordering(227) 00:09:01.932 fused_ordering(228) 00:09:01.932 fused_ordering(229) 00:09:01.932 fused_ordering(230) 00:09:01.932 fused_ordering(231) 00:09:01.932 fused_ordering(232) 00:09:01.932 fused_ordering(233) 00:09:01.932 fused_ordering(234) 00:09:01.933 fused_ordering(235) 00:09:01.933 fused_ordering(236) 00:09:01.933 fused_ordering(237) 00:09:01.933 fused_ordering(238) 00:09:01.933 fused_ordering(239) 00:09:01.933 fused_ordering(240) 00:09:01.933 fused_ordering(241) 00:09:01.933 fused_ordering(242) 00:09:01.933 fused_ordering(243) 00:09:01.933 fused_ordering(244) 00:09:01.933 fused_ordering(245) 00:09:01.933 fused_ordering(246) 00:09:01.933 fused_ordering(247) 00:09:01.933 fused_ordering(248) 00:09:01.933 fused_ordering(249) 00:09:01.933 fused_ordering(250) 00:09:01.933 fused_ordering(251) 00:09:01.933 fused_ordering(252) 00:09:01.933 fused_ordering(253) 00:09:01.933 fused_ordering(254) 00:09:01.933 fused_ordering(255) 00:09:01.933 fused_ordering(256) 00:09:01.933 fused_ordering(257) 00:09:01.933 fused_ordering(258) 00:09:01.933 fused_ordering(259) 00:09:01.933 fused_ordering(260) 
00:09:01.933 fused_ordering(261) 00:09:01.933 fused_ordering(262) 00:09:01.933 fused_ordering(263) 00:09:01.933 fused_ordering(264) 00:09:01.933 fused_ordering(265) 00:09:01.933 fused_ordering(266) 00:09:01.933 fused_ordering(267) 00:09:01.933 fused_ordering(268) 00:09:01.933 fused_ordering(269) 00:09:01.933 fused_ordering(270) 00:09:01.933 fused_ordering(271) 00:09:01.933 fused_ordering(272) 00:09:01.933 fused_ordering(273) 00:09:01.933 fused_ordering(274) 00:09:01.933 fused_ordering(275) 00:09:01.933 fused_ordering(276) 00:09:01.933 fused_ordering(277) 00:09:01.933 fused_ordering(278) 00:09:01.933 fused_ordering(279) 00:09:01.933 fused_ordering(280) 00:09:01.933 fused_ordering(281) 00:09:01.933 fused_ordering(282) 00:09:01.933 fused_ordering(283) 00:09:01.933 fused_ordering(284) 00:09:01.933 fused_ordering(285) 00:09:01.933 fused_ordering(286) 00:09:01.933 fused_ordering(287) 00:09:01.933 fused_ordering(288) 00:09:01.933 fused_ordering(289) 00:09:01.933 fused_ordering(290) 00:09:01.933 fused_ordering(291) 00:09:01.933 fused_ordering(292) 00:09:01.933 fused_ordering(293) 00:09:01.933 fused_ordering(294) 00:09:01.933 fused_ordering(295) 00:09:01.933 fused_ordering(296) 00:09:01.933 fused_ordering(297) 00:09:01.933 fused_ordering(298) 00:09:01.933 fused_ordering(299) 00:09:01.933 fused_ordering(300) 00:09:01.933 fused_ordering(301) 00:09:01.933 fused_ordering(302) 00:09:01.933 fused_ordering(303) 00:09:01.933 fused_ordering(304) 00:09:01.933 fused_ordering(305) 00:09:01.933 fused_ordering(306) 00:09:01.933 fused_ordering(307) 00:09:01.933 fused_ordering(308) 00:09:01.933 fused_ordering(309) 00:09:01.933 fused_ordering(310) 00:09:01.933 fused_ordering(311) 00:09:01.933 fused_ordering(312) 00:09:01.933 fused_ordering(313) 00:09:01.933 fused_ordering(314) 00:09:01.933 fused_ordering(315) 00:09:01.933 fused_ordering(316) 00:09:01.933 fused_ordering(317) 00:09:01.933 fused_ordering(318) 00:09:01.933 fused_ordering(319) 00:09:01.933 fused_ordering(320) 00:09:01.933 fused_ordering(321) 00:09:01.933 fused_ordering(322) 00:09:01.933 fused_ordering(323) 00:09:01.933 fused_ordering(324) 00:09:01.933 fused_ordering(325) 00:09:01.933 fused_ordering(326) 00:09:01.933 fused_ordering(327) 00:09:01.933 fused_ordering(328) 00:09:01.933 fused_ordering(329) 00:09:01.933 fused_ordering(330) 00:09:01.933 fused_ordering(331) 00:09:01.933 fused_ordering(332) 00:09:01.933 fused_ordering(333) 00:09:01.933 fused_ordering(334) 00:09:01.933 fused_ordering(335) 00:09:01.933 fused_ordering(336) 00:09:01.933 fused_ordering(337) 00:09:01.933 fused_ordering(338) 00:09:01.933 fused_ordering(339) 00:09:01.933 fused_ordering(340) 00:09:01.933 fused_ordering(341) 00:09:01.933 fused_ordering(342) 00:09:01.933 fused_ordering(343) 00:09:01.933 fused_ordering(344) 00:09:01.933 fused_ordering(345) 00:09:01.933 fused_ordering(346) 00:09:01.933 fused_ordering(347) 00:09:01.933 fused_ordering(348) 00:09:01.933 fused_ordering(349) 00:09:01.933 fused_ordering(350) 00:09:01.933 fused_ordering(351) 00:09:01.933 fused_ordering(352) 00:09:01.933 fused_ordering(353) 00:09:01.933 fused_ordering(354) 00:09:01.933 fused_ordering(355) 00:09:01.933 fused_ordering(356) 00:09:01.933 fused_ordering(357) 00:09:01.933 fused_ordering(358) 00:09:01.933 fused_ordering(359) 00:09:01.933 fused_ordering(360) 00:09:01.933 fused_ordering(361) 00:09:01.933 fused_ordering(362) 00:09:01.933 fused_ordering(363) 00:09:01.933 fused_ordering(364) 00:09:01.933 fused_ordering(365) 00:09:01.933 fused_ordering(366) 00:09:01.933 fused_ordering(367) 00:09:01.933 
fused_ordering(368) 00:09:01.933 fused_ordering(369) 00:09:01.933 fused_ordering(370) 00:09:01.933 fused_ordering(371) 00:09:01.933 fused_ordering(372) 00:09:01.933 fused_ordering(373) 00:09:01.933 fused_ordering(374) 00:09:01.933 fused_ordering(375) 00:09:01.933 fused_ordering(376) 00:09:01.933 fused_ordering(377) 00:09:01.933 fused_ordering(378) 00:09:01.933 fused_ordering(379) 00:09:01.933 fused_ordering(380) 00:09:01.933 fused_ordering(381) 00:09:01.933 fused_ordering(382) 00:09:01.933 fused_ordering(383) 00:09:01.933 fused_ordering(384) 00:09:01.933 fused_ordering(385) 00:09:01.933 fused_ordering(386) 00:09:01.933 fused_ordering(387) 00:09:01.933 fused_ordering(388) 00:09:01.933 fused_ordering(389) 00:09:01.933 fused_ordering(390) 00:09:01.933 fused_ordering(391) 00:09:01.933 fused_ordering(392) 00:09:01.933 fused_ordering(393) 00:09:01.933 fused_ordering(394) 00:09:01.933 fused_ordering(395) 00:09:01.933 fused_ordering(396) 00:09:01.934 fused_ordering(397) 00:09:01.934 fused_ordering(398) 00:09:01.934 fused_ordering(399) 00:09:01.934 fused_ordering(400) 00:09:01.934 fused_ordering(401) 00:09:01.934 fused_ordering(402) 00:09:01.934 fused_ordering(403) 00:09:01.934 fused_ordering(404) 00:09:01.934 fused_ordering(405) 00:09:01.934 fused_ordering(406) 00:09:01.934 fused_ordering(407) 00:09:01.934 fused_ordering(408) 00:09:01.934 fused_ordering(409) 00:09:01.934 fused_ordering(410) 00:09:02.501 fused_ordering(411) 00:09:02.501 fused_ordering(412) 00:09:02.501 fused_ordering(413) 00:09:02.501 fused_ordering(414) 00:09:02.501 fused_ordering(415) 00:09:02.501 fused_ordering(416) 00:09:02.501 fused_ordering(417) 00:09:02.501 fused_ordering(418) 00:09:02.501 fused_ordering(419) 00:09:02.501 fused_ordering(420) 00:09:02.501 fused_ordering(421) 00:09:02.501 fused_ordering(422) 00:09:02.501 fused_ordering(423) 00:09:02.501 fused_ordering(424) 00:09:02.501 fused_ordering(425) 00:09:02.501 fused_ordering(426) 00:09:02.501 fused_ordering(427) 00:09:02.501 fused_ordering(428) 00:09:02.501 fused_ordering(429) 00:09:02.501 fused_ordering(430) 00:09:02.501 fused_ordering(431) 00:09:02.501 fused_ordering(432) 00:09:02.501 fused_ordering(433) 00:09:02.501 fused_ordering(434) 00:09:02.501 fused_ordering(435) 00:09:02.501 fused_ordering(436) 00:09:02.501 fused_ordering(437) 00:09:02.501 fused_ordering(438) 00:09:02.501 fused_ordering(439) 00:09:02.501 fused_ordering(440) 00:09:02.501 fused_ordering(441) 00:09:02.501 fused_ordering(442) 00:09:02.501 fused_ordering(443) 00:09:02.501 fused_ordering(444) 00:09:02.501 fused_ordering(445) 00:09:02.501 fused_ordering(446) 00:09:02.501 fused_ordering(447) 00:09:02.501 fused_ordering(448) 00:09:02.501 fused_ordering(449) 00:09:02.501 fused_ordering(450) 00:09:02.501 fused_ordering(451) 00:09:02.501 fused_ordering(452) 00:09:02.501 fused_ordering(453) 00:09:02.501 fused_ordering(454) 00:09:02.501 fused_ordering(455) 00:09:02.501 fused_ordering(456) 00:09:02.501 fused_ordering(457) 00:09:02.501 fused_ordering(458) 00:09:02.501 fused_ordering(459) 00:09:02.501 fused_ordering(460) 00:09:02.501 fused_ordering(461) 00:09:02.501 fused_ordering(462) 00:09:02.501 fused_ordering(463) 00:09:02.501 fused_ordering(464) 00:09:02.501 fused_ordering(465) 00:09:02.501 fused_ordering(466) 00:09:02.501 fused_ordering(467) 00:09:02.501 fused_ordering(468) 00:09:02.501 fused_ordering(469) 00:09:02.501 fused_ordering(470) 00:09:02.501 fused_ordering(471) 00:09:02.501 fused_ordering(472) 00:09:02.501 fused_ordering(473) 00:09:02.501 fused_ordering(474) 00:09:02.501 fused_ordering(475) 
00:09:02.501 fused_ordering(476) 00:09:02.501 fused_ordering(477) 00:09:02.501 fused_ordering(478) 00:09:02.501 fused_ordering(479) 00:09:02.501 fused_ordering(480) 00:09:02.501 fused_ordering(481) 00:09:02.501 fused_ordering(482) 00:09:02.501 fused_ordering(483) 00:09:02.501 fused_ordering(484) 00:09:02.501 fused_ordering(485) 00:09:02.501 fused_ordering(486) 00:09:02.501 fused_ordering(487) 00:09:02.501 fused_ordering(488) 00:09:02.501 fused_ordering(489) 00:09:02.501 fused_ordering(490) 00:09:02.501 fused_ordering(491) 00:09:02.501 fused_ordering(492) 00:09:02.501 fused_ordering(493) 00:09:02.501 fused_ordering(494) 00:09:02.501 fused_ordering(495) 00:09:02.501 fused_ordering(496) 00:09:02.501 fused_ordering(497) 00:09:02.501 fused_ordering(498) 00:09:02.501 fused_ordering(499) 00:09:02.501 fused_ordering(500) 00:09:02.501 fused_ordering(501) 00:09:02.501 fused_ordering(502) 00:09:02.501 fused_ordering(503) 00:09:02.501 fused_ordering(504) 00:09:02.501 fused_ordering(505) 00:09:02.501 fused_ordering(506) 00:09:02.501 fused_ordering(507) 00:09:02.501 fused_ordering(508) 00:09:02.501 fused_ordering(509) 00:09:02.501 fused_ordering(510) 00:09:02.501 fused_ordering(511) 00:09:02.501 fused_ordering(512) 00:09:02.501 fused_ordering(513) 00:09:02.501 fused_ordering(514) 00:09:02.501 fused_ordering(515) 00:09:02.501 fused_ordering(516) 00:09:02.501 fused_ordering(517) 00:09:02.501 fused_ordering(518) 00:09:02.501 fused_ordering(519) 00:09:02.501 fused_ordering(520) 00:09:02.501 fused_ordering(521) 00:09:02.501 fused_ordering(522) 00:09:02.501 fused_ordering(523) 00:09:02.501 fused_ordering(524) 00:09:02.501 fused_ordering(525) 00:09:02.501 fused_ordering(526) 00:09:02.501 fused_ordering(527) 00:09:02.501 fused_ordering(528) 00:09:02.501 fused_ordering(529) 00:09:02.501 fused_ordering(530) 00:09:02.501 fused_ordering(531) 00:09:02.501 fused_ordering(532) 00:09:02.501 fused_ordering(533) 00:09:02.501 fused_ordering(534) 00:09:02.501 fused_ordering(535) 00:09:02.501 fused_ordering(536) 00:09:02.501 fused_ordering(537) 00:09:02.501 fused_ordering(538) 00:09:02.501 fused_ordering(539) 00:09:02.501 fused_ordering(540) 00:09:02.501 fused_ordering(541) 00:09:02.501 fused_ordering(542) 00:09:02.501 fused_ordering(543) 00:09:02.501 fused_ordering(544) 00:09:02.501 fused_ordering(545) 00:09:02.501 fused_ordering(546) 00:09:02.501 fused_ordering(547) 00:09:02.501 fused_ordering(548) 00:09:02.501 fused_ordering(549) 00:09:02.501 fused_ordering(550) 00:09:02.501 fused_ordering(551) 00:09:02.501 fused_ordering(552) 00:09:02.501 fused_ordering(553) 00:09:02.501 fused_ordering(554) 00:09:02.501 fused_ordering(555) 00:09:02.501 fused_ordering(556) 00:09:02.501 fused_ordering(557) 00:09:02.501 fused_ordering(558) 00:09:02.501 fused_ordering(559) 00:09:02.501 fused_ordering(560) 00:09:02.501 fused_ordering(561) 00:09:02.501 fused_ordering(562) 00:09:02.501 fused_ordering(563) 00:09:02.501 fused_ordering(564) 00:09:02.501 fused_ordering(565) 00:09:02.501 fused_ordering(566) 00:09:02.501 fused_ordering(567) 00:09:02.501 fused_ordering(568) 00:09:02.501 fused_ordering(569) 00:09:02.501 fused_ordering(570) 00:09:02.501 fused_ordering(571) 00:09:02.501 fused_ordering(572) 00:09:02.501 fused_ordering(573) 00:09:02.501 fused_ordering(574) 00:09:02.501 fused_ordering(575) 00:09:02.501 fused_ordering(576) 00:09:02.501 fused_ordering(577) 00:09:02.501 fused_ordering(578) 00:09:02.501 fused_ordering(579) 00:09:02.501 fused_ordering(580) 00:09:02.501 fused_ordering(581) 00:09:02.501 fused_ordering(582) 00:09:02.501 
fused_ordering(583) 00:09:02.501 fused_ordering(584) 00:09:02.501 fused_ordering(585) 00:09:02.501 fused_ordering(586) 00:09:02.501 fused_ordering(587) 00:09:02.501 fused_ordering(588) 00:09:02.501 fused_ordering(589) 00:09:02.501 fused_ordering(590) 00:09:02.501 fused_ordering(591) 00:09:02.501 fused_ordering(592) 00:09:02.501 fused_ordering(593) 00:09:02.501 fused_ordering(594) 00:09:02.501 fused_ordering(595) 00:09:02.501 fused_ordering(596) 00:09:02.501 fused_ordering(597) 00:09:02.501 fused_ordering(598) 00:09:02.501 fused_ordering(599) 00:09:02.501 fused_ordering(600) 00:09:02.501 fused_ordering(601) 00:09:02.501 fused_ordering(602) 00:09:02.501 fused_ordering(603) 00:09:02.502 fused_ordering(604) 00:09:02.502 fused_ordering(605) 00:09:02.502 fused_ordering(606) 00:09:02.502 fused_ordering(607) 00:09:02.502 fused_ordering(608) 00:09:02.502 fused_ordering(609) 00:09:02.502 fused_ordering(610) 00:09:02.502 fused_ordering(611) 00:09:02.502 fused_ordering(612) 00:09:02.502 fused_ordering(613) 00:09:02.502 fused_ordering(614) 00:09:02.502 fused_ordering(615) 00:09:03.070 fused_ordering(616) 00:09:03.070 fused_ordering(617) 00:09:03.070 fused_ordering(618) 00:09:03.070 fused_ordering(619) 00:09:03.070 fused_ordering(620) 00:09:03.070 fused_ordering(621) 00:09:03.070 fused_ordering(622) 00:09:03.070 fused_ordering(623) 00:09:03.070 fused_ordering(624) 00:09:03.070 fused_ordering(625) 00:09:03.070 fused_ordering(626) 00:09:03.070 fused_ordering(627) 00:09:03.070 fused_ordering(628) 00:09:03.070 fused_ordering(629) 00:09:03.070 fused_ordering(630) 00:09:03.070 fused_ordering(631) 00:09:03.070 fused_ordering(632) 00:09:03.070 fused_ordering(633) 00:09:03.070 fused_ordering(634) 00:09:03.070 fused_ordering(635) 00:09:03.070 fused_ordering(636) 00:09:03.070 fused_ordering(637) 00:09:03.070 fused_ordering(638) 00:09:03.070 fused_ordering(639) 00:09:03.070 fused_ordering(640) 00:09:03.070 fused_ordering(641) 00:09:03.070 fused_ordering(642) 00:09:03.070 fused_ordering(643) 00:09:03.070 fused_ordering(644) 00:09:03.070 fused_ordering(645) 00:09:03.070 fused_ordering(646) 00:09:03.070 fused_ordering(647) 00:09:03.070 fused_ordering(648) 00:09:03.070 fused_ordering(649) 00:09:03.070 fused_ordering(650) 00:09:03.070 fused_ordering(651) 00:09:03.070 fused_ordering(652) 00:09:03.070 fused_ordering(653) 00:09:03.070 fused_ordering(654) 00:09:03.070 fused_ordering(655) 00:09:03.070 fused_ordering(656) 00:09:03.070 fused_ordering(657) 00:09:03.070 fused_ordering(658) 00:09:03.070 fused_ordering(659) 00:09:03.070 fused_ordering(660) 00:09:03.070 fused_ordering(661) 00:09:03.070 fused_ordering(662) 00:09:03.070 fused_ordering(663) 00:09:03.070 fused_ordering(664) 00:09:03.070 fused_ordering(665) 00:09:03.070 fused_ordering(666) 00:09:03.070 fused_ordering(667) 00:09:03.070 fused_ordering(668) 00:09:03.070 fused_ordering(669) 00:09:03.070 fused_ordering(670) 00:09:03.070 fused_ordering(671) 00:09:03.070 fused_ordering(672) 00:09:03.070 fused_ordering(673) 00:09:03.070 fused_ordering(674) 00:09:03.070 fused_ordering(675) 00:09:03.070 fused_ordering(676) 00:09:03.070 fused_ordering(677) 00:09:03.070 fused_ordering(678) 00:09:03.070 fused_ordering(679) 00:09:03.070 fused_ordering(680) 00:09:03.070 fused_ordering(681) 00:09:03.070 fused_ordering(682) 00:09:03.070 fused_ordering(683) 00:09:03.070 fused_ordering(684) 00:09:03.070 fused_ordering(685) 00:09:03.070 fused_ordering(686) 00:09:03.070 fused_ordering(687) 00:09:03.070 fused_ordering(688) 00:09:03.070 fused_ordering(689) 00:09:03.070 fused_ordering(690) 
00:09:03.070 fused_ordering(691) 00:09:03.070 fused_ordering(692) 00:09:03.070 fused_ordering(693) 00:09:03.070 fused_ordering(694) 00:09:03.070 fused_ordering(695) 00:09:03.070 fused_ordering(696) 00:09:03.070 fused_ordering(697) 00:09:03.070 fused_ordering(698) 00:09:03.070 fused_ordering(699) 00:09:03.070 fused_ordering(700) 00:09:03.070 fused_ordering(701) 00:09:03.070 fused_ordering(702) 00:09:03.070 fused_ordering(703) 00:09:03.070 fused_ordering(704) 00:09:03.070 fused_ordering(705) 00:09:03.070 fused_ordering(706) 00:09:03.070 fused_ordering(707) 00:09:03.070 fused_ordering(708) 00:09:03.070 fused_ordering(709) 00:09:03.070 fused_ordering(710) 00:09:03.070 fused_ordering(711) 00:09:03.070 fused_ordering(712) 00:09:03.070 fused_ordering(713) 00:09:03.070 fused_ordering(714) 00:09:03.070 fused_ordering(715) 00:09:03.070 fused_ordering(716) 00:09:03.070 fused_ordering(717) 00:09:03.070 fused_ordering(718) 00:09:03.070 fused_ordering(719) 00:09:03.070 fused_ordering(720) 00:09:03.070 fused_ordering(721) 00:09:03.070 fused_ordering(722) 00:09:03.070 fused_ordering(723) 00:09:03.070 fused_ordering(724) 00:09:03.070 fused_ordering(725) 00:09:03.070 fused_ordering(726) 00:09:03.070 fused_ordering(727) 00:09:03.070 fused_ordering(728) 00:09:03.070 fused_ordering(729) 00:09:03.070 fused_ordering(730) 00:09:03.070 fused_ordering(731) 00:09:03.070 fused_ordering(732) 00:09:03.070 fused_ordering(733) 00:09:03.070 fused_ordering(734) 00:09:03.070 fused_ordering(735) 00:09:03.070 fused_ordering(736) 00:09:03.070 fused_ordering(737) 00:09:03.070 fused_ordering(738) 00:09:03.070 fused_ordering(739) 00:09:03.070 fused_ordering(740) 00:09:03.070 fused_ordering(741) 00:09:03.070 fused_ordering(742) 00:09:03.070 fused_ordering(743) 00:09:03.070 fused_ordering(744) 00:09:03.070 fused_ordering(745) 00:09:03.070 fused_ordering(746) 00:09:03.070 fused_ordering(747) 00:09:03.071 fused_ordering(748) 00:09:03.071 fused_ordering(749) 00:09:03.071 fused_ordering(750) 00:09:03.071 fused_ordering(751) 00:09:03.071 fused_ordering(752) 00:09:03.071 fused_ordering(753) 00:09:03.071 fused_ordering(754) 00:09:03.071 fused_ordering(755) 00:09:03.071 fused_ordering(756) 00:09:03.071 fused_ordering(757) 00:09:03.071 fused_ordering(758) 00:09:03.071 fused_ordering(759) 00:09:03.071 fused_ordering(760) 00:09:03.071 fused_ordering(761) 00:09:03.071 fused_ordering(762) 00:09:03.071 fused_ordering(763) 00:09:03.071 fused_ordering(764) 00:09:03.071 fused_ordering(765) 00:09:03.071 fused_ordering(766) 00:09:03.071 fused_ordering(767) 00:09:03.071 fused_ordering(768) 00:09:03.071 fused_ordering(769) 00:09:03.071 fused_ordering(770) 00:09:03.071 fused_ordering(771) 00:09:03.071 fused_ordering(772) 00:09:03.071 fused_ordering(773) 00:09:03.071 fused_ordering(774) 00:09:03.071 fused_ordering(775) 00:09:03.071 fused_ordering(776) 00:09:03.071 fused_ordering(777) 00:09:03.071 fused_ordering(778) 00:09:03.071 fused_ordering(779) 00:09:03.071 fused_ordering(780) 00:09:03.071 fused_ordering(781) 00:09:03.071 fused_ordering(782) 00:09:03.071 fused_ordering(783) 00:09:03.071 fused_ordering(784) 00:09:03.071 fused_ordering(785) 00:09:03.071 fused_ordering(786) 00:09:03.071 fused_ordering(787) 00:09:03.071 fused_ordering(788) 00:09:03.071 fused_ordering(789) 00:09:03.071 fused_ordering(790) 00:09:03.071 fused_ordering(791) 00:09:03.071 fused_ordering(792) 00:09:03.071 fused_ordering(793) 00:09:03.071 fused_ordering(794) 00:09:03.071 fused_ordering(795) 00:09:03.071 fused_ordering(796) 00:09:03.071 fused_ordering(797) 00:09:03.071 
fused_ordering(798) 00:09:03.071 fused_ordering(799) 00:09:03.071 fused_ordering(800) 00:09:03.071 fused_ordering(801) 00:09:03.071 fused_ordering(802) 00:09:03.071 fused_ordering(803) 00:09:03.071 fused_ordering(804) 00:09:03.071 fused_ordering(805) 00:09:03.071 fused_ordering(806) 00:09:03.071 fused_ordering(807) 00:09:03.071 fused_ordering(808) 00:09:03.071 fused_ordering(809) 00:09:03.071 fused_ordering(810) 00:09:03.071 fused_ordering(811) 00:09:03.071 fused_ordering(812) 00:09:03.071 fused_ordering(813) 00:09:03.071 fused_ordering(814) 00:09:03.071 fused_ordering(815) 00:09:03.071 fused_ordering(816) 00:09:03.071 fused_ordering(817) 00:09:03.071 fused_ordering(818) 00:09:03.071 fused_ordering(819) 00:09:03.071 fused_ordering(820) 00:09:03.635 fused_ordering(821) 00:09:03.635 fused_ordering(822) 00:09:03.635 fused_ordering(823) 00:09:03.635 fused_ordering(824) 00:09:03.635 fused_ordering(825) 00:09:03.635 fused_ordering(826) 00:09:03.635 fused_ordering(827) 00:09:03.635 fused_ordering(828) 00:09:03.635 fused_ordering(829) 00:09:03.635 fused_ordering(830) 00:09:03.635 fused_ordering(831) 00:09:03.635 fused_ordering(832) 00:09:03.635 fused_ordering(833) 00:09:03.635 fused_ordering(834) 00:09:03.635 fused_ordering(835) 00:09:03.635 fused_ordering(836) 00:09:03.635 fused_ordering(837) 00:09:03.635 fused_ordering(838) 00:09:03.635 fused_ordering(839) 00:09:03.635 fused_ordering(840) 00:09:03.635 fused_ordering(841) 00:09:03.635 fused_ordering(842) 00:09:03.635 fused_ordering(843) 00:09:03.635 fused_ordering(844) 00:09:03.635 fused_ordering(845) 00:09:03.635 fused_ordering(846) 00:09:03.635 fused_ordering(847) 00:09:03.635 fused_ordering(848) 00:09:03.635 fused_ordering(849) 00:09:03.635 fused_ordering(850) 00:09:03.635 fused_ordering(851) 00:09:03.635 fused_ordering(852) 00:09:03.635 fused_ordering(853) 00:09:03.635 fused_ordering(854) 00:09:03.635 fused_ordering(855) 00:09:03.635 fused_ordering(856) 00:09:03.635 fused_ordering(857) 00:09:03.635 fused_ordering(858) 00:09:03.635 fused_ordering(859) 00:09:03.635 fused_ordering(860) 00:09:03.635 fused_ordering(861) 00:09:03.635 fused_ordering(862) 00:09:03.635 fused_ordering(863) 00:09:03.635 fused_ordering(864) 00:09:03.635 fused_ordering(865) 00:09:03.635 fused_ordering(866) 00:09:03.635 fused_ordering(867) 00:09:03.635 fused_ordering(868) 00:09:03.635 fused_ordering(869) 00:09:03.635 fused_ordering(870) 00:09:03.635 fused_ordering(871) 00:09:03.635 fused_ordering(872) 00:09:03.635 fused_ordering(873) 00:09:03.635 fused_ordering(874) 00:09:03.635 fused_ordering(875) 00:09:03.635 fused_ordering(876) 00:09:03.635 fused_ordering(877) 00:09:03.635 fused_ordering(878) 00:09:03.635 fused_ordering(879) 00:09:03.635 fused_ordering(880) 00:09:03.635 fused_ordering(881) 00:09:03.635 fused_ordering(882) 00:09:03.635 fused_ordering(883) 00:09:03.635 fused_ordering(884) 00:09:03.635 fused_ordering(885) 00:09:03.635 fused_ordering(886) 00:09:03.635 fused_ordering(887) 00:09:03.635 fused_ordering(888) 00:09:03.635 fused_ordering(889) 00:09:03.635 fused_ordering(890) 00:09:03.635 fused_ordering(891) 00:09:03.635 fused_ordering(892) 00:09:03.635 fused_ordering(893) 00:09:03.635 fused_ordering(894) 00:09:03.635 fused_ordering(895) 00:09:03.635 fused_ordering(896) 00:09:03.635 fused_ordering(897) 00:09:03.635 fused_ordering(898) 00:09:03.635 fused_ordering(899) 00:09:03.635 fused_ordering(900) 00:09:03.636 fused_ordering(901) 00:09:03.636 fused_ordering(902) 00:09:03.636 fused_ordering(903) 00:09:03.636 fused_ordering(904) 00:09:03.636 fused_ordering(905) 
00:09:03.636 fused_ordering(906) 00:09:03.636 fused_ordering(907) 00:09:03.636 fused_ordering(908) 00:09:03.636 fused_ordering(909) 00:09:03.636 fused_ordering(910) 00:09:03.636 fused_ordering(911) 00:09:03.636 fused_ordering(912) 00:09:03.636 fused_ordering(913) 00:09:03.636 fused_ordering(914) 00:09:03.636 fused_ordering(915) 00:09:03.636 fused_ordering(916) 00:09:03.636 fused_ordering(917) 00:09:03.636 fused_ordering(918) 00:09:03.636 fused_ordering(919) 00:09:03.636 fused_ordering(920) 00:09:03.636 fused_ordering(921) 00:09:03.636 fused_ordering(922) 00:09:03.636 fused_ordering(923) 00:09:03.636 fused_ordering(924) 00:09:03.636 fused_ordering(925) 00:09:03.636 fused_ordering(926) 00:09:03.636 fused_ordering(927) 00:09:03.636 fused_ordering(928) 00:09:03.636 fused_ordering(929) 00:09:03.636 fused_ordering(930) 00:09:03.636 fused_ordering(931) 00:09:03.636 fused_ordering(932) 00:09:03.636 fused_ordering(933) 00:09:03.636 fused_ordering(934) 00:09:03.636 fused_ordering(935) 00:09:03.636 fused_ordering(936) 00:09:03.636 fused_ordering(937) 00:09:03.636 fused_ordering(938) 00:09:03.636 fused_ordering(939) 00:09:03.636 fused_ordering(940) 00:09:03.636 fused_ordering(941) 00:09:03.636 fused_ordering(942) 00:09:03.636 fused_ordering(943) 00:09:03.636 fused_ordering(944) 00:09:03.636 fused_ordering(945) 00:09:03.636 fused_ordering(946) 00:09:03.636 fused_ordering(947) 00:09:03.636 fused_ordering(948) 00:09:03.636 fused_ordering(949) 00:09:03.636 fused_ordering(950) 00:09:03.636 fused_ordering(951) 00:09:03.636 fused_ordering(952) 00:09:03.636 fused_ordering(953) 00:09:03.636 fused_ordering(954) 00:09:03.636 fused_ordering(955) 00:09:03.636 fused_ordering(956) 00:09:03.636 fused_ordering(957) 00:09:03.636 fused_ordering(958) 00:09:03.636 fused_ordering(959) 00:09:03.636 fused_ordering(960) 00:09:03.636 fused_ordering(961) 00:09:03.636 fused_ordering(962) 00:09:03.636 fused_ordering(963) 00:09:03.636 fused_ordering(964) 00:09:03.636 fused_ordering(965) 00:09:03.636 fused_ordering(966) 00:09:03.636 fused_ordering(967) 00:09:03.636 fused_ordering(968) 00:09:03.636 fused_ordering(969) 00:09:03.636 fused_ordering(970) 00:09:03.636 fused_ordering(971) 00:09:03.636 fused_ordering(972) 00:09:03.636 fused_ordering(973) 00:09:03.636 fused_ordering(974) 00:09:03.636 fused_ordering(975) 00:09:03.636 fused_ordering(976) 00:09:03.636 fused_ordering(977) 00:09:03.636 fused_ordering(978) 00:09:03.636 fused_ordering(979) 00:09:03.636 fused_ordering(980) 00:09:03.636 fused_ordering(981) 00:09:03.636 fused_ordering(982) 00:09:03.636 fused_ordering(983) 00:09:03.636 fused_ordering(984) 00:09:03.636 fused_ordering(985) 00:09:03.636 fused_ordering(986) 00:09:03.636 fused_ordering(987) 00:09:03.636 fused_ordering(988) 00:09:03.636 fused_ordering(989) 00:09:03.636 fused_ordering(990) 00:09:03.636 fused_ordering(991) 00:09:03.636 fused_ordering(992) 00:09:03.636 fused_ordering(993) 00:09:03.636 fused_ordering(994) 00:09:03.636 fused_ordering(995) 00:09:03.636 fused_ordering(996) 00:09:03.636 fused_ordering(997) 00:09:03.636 fused_ordering(998) 00:09:03.636 fused_ordering(999) 00:09:03.636 fused_ordering(1000) 00:09:03.636 fused_ordering(1001) 00:09:03.636 fused_ordering(1002) 00:09:03.636 fused_ordering(1003) 00:09:03.636 fused_ordering(1004) 00:09:03.636 fused_ordering(1005) 00:09:03.636 fused_ordering(1006) 00:09:03.636 fused_ordering(1007) 00:09:03.636 fused_ordering(1008) 00:09:03.636 fused_ordering(1009) 00:09:03.636 fused_ordering(1010) 00:09:03.636 fused_ordering(1011) 00:09:03.636 fused_ordering(1012) 
00:09:03.636 fused_ordering(1013) 00:09:03.636 fused_ordering(1014) 00:09:03.636 fused_ordering(1015) 00:09:03.636 fused_ordering(1016) 00:09:03.636 fused_ordering(1017) 00:09:03.636 fused_ordering(1018) 00:09:03.636 fused_ordering(1019) 00:09:03.636 fused_ordering(1020) 00:09:03.636 fused_ordering(1021) 00:09:03.636 fused_ordering(1022) 00:09:03.636 fused_ordering(1023) 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:03.636 rmmod nvme_tcp 00:09:03.636 rmmod nvme_fabrics 00:09:03.636 rmmod nvme_keyring 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3325014 ']' 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3325014 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3325014 ']' 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3325014 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:03.636 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3325014 00:09:03.894 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:03.895 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:03.895 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3325014' 00:09:03.895 killing process with pid 3325014 00:09:03.895 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3325014 00:09:03.895 12:48:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3325014 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.153 12:48:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.055 12:48:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.055 00:09:06.055 real 0m7.578s 00:09:06.055 user 0m4.739s 00:09:06.055 sys 0m3.577s 00:09:06.055 12:48:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:06.055 12:48:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:09:06.055 ************************************ 00:09:06.055 END TEST nvmf_fused_ordering 00:09:06.055 ************************************ 00:09:06.055 12:48:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:06.055 12:48:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:06.055 12:48:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:06.055 12:48:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.055 12:48:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.055 ************************************ 00:09:06.055 START TEST nvmf_delete_subsystem 00:09:06.055 ************************************ 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:06.055 * Looking for test storage... 00:09:06.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.055 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.313 12:48:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.313 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:06.314 12:48:24 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.314 12:48:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.212 12:48:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.212 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.470 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:08.471 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:08.471 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.471 12:48:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:08.471 Found net devices under 0000:84:00.0: cvl_0_0 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:08.471 Found net devices under 0000:84:00.1: cvl_0_1 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.471 12:48:26 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:09:08.471 00:09:08.471 --- 10.0.0.2 ping statistics --- 00:09:08.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.471 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:09:08.471 00:09:08.471 --- 10.0.0.1 ping statistics --- 00:09:08.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.471 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3327374 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3327374 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3327374 ']' 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.471 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.471 [2024-07-15 12:48:26.632351] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:09:08.471 [2024-07-15 12:48:26.632449] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.471 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.730 [2024-07-15 12:48:26.695865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:08.730 [2024-07-15 12:48:26.796668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:08.730 [2024-07-15 12:48:26.796743] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:08.730 [2024-07-15 12:48:26.796761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.730 [2024-07-15 12:48:26.796772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.730 [2024-07-15 12:48:26.796791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.730 [2024-07-15 12:48:26.796869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.730 [2024-07-15 12:48:26.796874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.730 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.730 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:09:08.730 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.730 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.730 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.989 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 [2024-07-15 12:48:26.943834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 [2024-07-15 12:48:26.960069] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 NULL1 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 Delay0 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3327396 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:08.990 12:48:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:08.990 EAL: No free 2048 kB hugepages reported on node 1 00:09:08.990 [2024-07-15 12:48:27.034677] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
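For reference, the target-side configuration that delete_subsystem.sh has just driven through rpc_cmd condenses to the RPC sequence sketched below. This is a condensed sketch, assuming SPDK's scripts/rpc.py client talking to the nvmf_tgt started above over the default /var/tmp/spdk.sock socket (rpc_cmd in the trace is a thin wrapper around that client); the comments paraphrase the arguments exactly as they appear in the trace.

  # TCP transport with the options used by the test (-o, plus -u 8192 for the I/O unit size)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # Subsystem cnode1: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m 10)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Listen on the target-side data address configured earlier inside the test namespace
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Null bdev (1000 MB, 512-byte blocks) wrapped in a delay bdev; the -r/-t/-w/-n values are
  # latencies in microseconds (roughly 1 s per I/O), so commands stay queued long enough for the
  # subsystem deletion in the next step to race with them
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0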
00:09:10.897 12:48:28 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:10.897 12:48:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.897 12:48:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 starting I/O failed: -6 00:09:11.156 Write completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 Write completed with error (sct=0, sc=8) 00:09:11.156 starting I/O failed: -6 00:09:11.156 Write completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 Read completed with error (sct=0, sc=8) 00:09:11.156 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 starting I/O failed: -6 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 [2024-07-15 12:48:29.205499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c555c0 is same with the state(5) to be set 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Write completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 00:09:11.157 Read completed with error (sct=0, sc=8) 
[00:09:11.157 - 00:09:12.095: several hundred interleaved "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions with repeated "starting I/O failed: -6" markers, emitted while the test deletes the subsystem under active I/O; the distinct error records from this stretch follow]
[2024-07-15 12:48:29.206340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e5400d450 is same with the state(5) to be set
[2024-07-15 12:48:30.172564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c56ac0 is same with the state(5) to be set
[2024-07-15 12:48:30.206240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c553e0 is same with the state(5) to be set
[2024-07-15 12:48:30.208046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c557a0 is same with the state(5) to be set
[2024-07-15 12:48:30.208551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e5400cfe0 is same with the state(5) to be set
[2024-07-15 12:48:30.208734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e5400d760 is same with the state(5) to be set
00:09:12.095 Initializing NVMe Controllers
00:09:12.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:12.095 Controller IO queue size 128, less than required.
00:09:12.095 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:12.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:12.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:12.095 Initialization complete. Launching workers.
00:09:12.095 ======================================================== 00:09:12.095 Latency(us) 00:09:12.095 Device Information : IOPS MiB/s Average min max 00:09:12.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 159.84 0.08 918721.32 618.73 1012088.94 00:09:12.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.32 0.08 947801.62 312.56 2001635.85 00:09:12.095 ======================================================== 00:09:12.095 Total : 322.16 0.16 933373.49 312.56 2001635.85 00:09:12.095 00:09:12.095 [2024-07-15 12:48:30.209636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c56ac0 (9): Bad file descriptor 00:09:12.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:12.095 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.095 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:12.095 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3327396 00:09:12.095 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:12.666 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:12.666 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3327396 00:09:12.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3327396) - No such process 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3327396 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3327396 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3327396 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
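The delay / kill -0 / sleep 0.5 records above are the test's bounded wait on the background perf process: it polls the PID every half second and only gives up after a fixed number of ticks. A minimal sketch of that pattern (the perf_pid name is illustrative; the 0.5 s tick and the "(( delay++ > 20 ))" style guard are the ones visible in the trace):

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do     # still running?
        if (( delay++ > 20 )); then               # bounded: give up after roughly 10 s of polling
            echo "spdk_nvme_perf ($perf_pid) did not exit in time" >&2
            exit 1
        fi
        sleep 0.5
    done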
00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.667 [2024-07-15 12:48:30.733598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3327916 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:12.667 12:48:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:12.667 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.667 [2024-07-15 12:48:30.798834] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
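For readability, the perf invocation recorded just above, restated one flag per line (same command, path unchanged; the glosses in the comment are editorial):

    # 3 s of random mixed I/O (70% reads), 512-byte requests, queue depth 128,
    # on cores 2 and 3 (mask 0xC), against the TCP listener at 10.0.0.2:4420.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4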
00:09:13.237 12:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:13.237 12:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:13.237 12:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:13.804 12:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:13.804 12:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:13.804 12:48:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:14.063 12:48:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.063 12:48:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:14.063 12:48:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:14.632 12:48:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:14.632 12:48:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:14.632 12:48:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.202 12:48:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.202 12:48:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:15.202 12:48:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:15.772 12:48:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:15.772 12:48:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:15.772 12:48:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:16.030 Initializing NVMe Controllers 00:09:16.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:16.030 Controller IO queue size 128, less than required. 00:09:16.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:16.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:16.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:16.030 Initialization complete. Launching workers. 
00:09:16.030 ======================================================== 00:09:16.030 Latency(us) 00:09:16.030 Device Information : IOPS MiB/s Average min max 00:09:16.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004314.53 1000221.66 1041370.53 00:09:16.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004391.46 1000225.06 1042206.62 00:09:16.030 ======================================================== 00:09:16.030 Total : 256.00 0.12 1004352.99 1000221.66 1042206.62 00:09:16.030 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3327916 00:09:16.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3327916) - No such process 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3327916 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:16.288 rmmod nvme_tcp 00:09:16.288 rmmod nvme_fabrics 00:09:16.288 rmmod nvme_keyring 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3327374 ']' 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3327374 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3327374 ']' 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3327374 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3327374 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3327374' 00:09:16.288 killing process with pid 3327374 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3327374 00:09:16.288 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
3327374 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:16.547 12:48:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.448 12:48:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.448 00:09:18.448 real 0m12.448s 00:09:18.448 user 0m27.826s 00:09:18.448 sys 0m3.227s 00:09:18.448 12:48:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:18.448 12:48:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.448 ************************************ 00:09:18.448 END TEST nvmf_delete_subsystem 00:09:18.448 ************************************ 00:09:18.706 12:48:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:18.706 12:48:36 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:18.706 12:48:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:18.706 12:48:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:18.706 12:48:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.706 ************************************ 00:09:18.706 START TEST nvmf_ns_masking 00:09:18.706 ************************************ 00:09:18.706 12:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:18.706 * Looking for test storage... 
00:09:18.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0fa549ff-b439-4513-958e-eab5d530290b 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=08fefdef-4d9d-464c-a07a-cfbc1972a4a7 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0c41eecd-dcde-4c81-b367-1d339d7d3140 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.707 12:48:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.238 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:21.239 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:21.239 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.239 
12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:21.239 Found net devices under 0000:84:00.0: cvl_0_0 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:21.239 Found net devices under 0000:84:00.1: cvl_0_1 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.239 12:48:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:09:21.239 00:09:21.239 --- 10.0.0.2 ping statistics --- 00:09:21.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.239 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:21.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:09:21.239 00:09:21.239 --- 10.0.0.1 ping statistics --- 00:09:21.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.239 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3330286 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3330286 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3330286 ']' 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.239 12:48:39 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.239 [2024-07-15 12:48:39.104651] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:09:21.239 [2024-07-15 12:48:39.104757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.239 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.239 [2024-07-15 12:48:39.170022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.239 [2024-07-15 12:48:39.279995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.239 [2024-07-15 12:48:39.280081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.239 [2024-07-15 12:48:39.280105] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.239 [2024-07-15 12:48:39.280116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.239 [2024-07-15 12:48:39.280126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.239 [2024-07-15 12:48:39.280152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.239 12:48:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:21.497 [2024-07-15 12:48:39.682924] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.497 12:48:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:09:21.497 12:48:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:09:21.497 12:48:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:21.756 Malloc1 00:09:22.013 12:48:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:22.271 Malloc2 00:09:22.271 12:48:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
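Condensed for reference, the target-side bring-up traced above reduces to this RPC sequence (the $rpc shorthand and the comments are editorial; the commands are the ones recorded):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as recorded
    $rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host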
00:09:22.529 12:48:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:22.788 12:48:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.059 [2024-07-15 12:48:41.063180] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:23.059 12:48:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:09:23.059 12:48:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0c41eecd-dcde-4c81-b367-1d339d7d3140 -a 10.0.0.2 -s 4420 -i 4 00:09:23.317 12:48:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.317 12:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:23.317 12:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.317 12:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:23.317 12:48:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:25.299 [ 0]:0x1 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=186653c7cdf449b5827e624033e9e673 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 186653c7cdf449b5827e624033e9e673 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.299 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
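The visibility probes above pair nvme-cli with jq; a simplified reconstruction of that check, assuming the controller resolves to /dev/nvme0 as it does in the trace:

    ns_is_visible() {
        local nsid=$1                              # e.g. 0x1 or 0x2
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # In this test a namespace hidden from the host reports an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }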
00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:25.557 [ 0]:0x1 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=186653c7cdf449b5827e624033e9e673 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 186653c7cdf449b5827e624033e9e673 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:25.557 [ 1]:0x2 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:09:25.557 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.816 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.816 12:48:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.076 12:48:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0c41eecd-dcde-4c81-b367-1d339d7d3140 -a 10.0.0.2 -s 4420 -i 4 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:09:26.334 12:48:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:28.872 12:48:46 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:28.872 [ 0]:0x2 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:28.872 [ 0]:0x1 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=186653c7cdf449b5827e624033e9e673 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 186653c7cdf449b5827e624033e9e673 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:28.872 12:48:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:28.872 [ 1]:0x2 00:09:28.872 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:28.872 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:28.872 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:28.872 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:28.872 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:29.131 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:29.389 [ 0]:0x2 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.389 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:29.649 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:09:29.649 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0c41eecd-dcde-4c81-b367-1d339d7d3140 -a 10.0.0.2 -s 4420 -i 4 00:09:29.910 12:48:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:29.910 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:09:29.910 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.910 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:29.910 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:29.910 12:48:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
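The ns_is_visible checks traced above (ns_masking.sh@43-45) reduce to two steps: the namespace must show up in the controller's namespace list, and identify-namespace must report a non-zero NGUID. A minimal sketch of that helper, reconstructed only from the commands visible in this trace (the real ns_masking.sh internals may differ):

# Sketch of the visibility check exercised in the trace; reconstructed, not copied verbatim.
ns_is_visible() {
    local nsid=$1
    # Namespace must be listed by the controller...
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # ...and its NGUID must be non-zero when identified.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# After nvmf_ns_remove_host the masked namespace is expected to fail this check,
# which is why the trace wraps it in NOT (inverts the exit status).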
00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:09:31.816 12:48:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:32.073 [ 0]:0x1 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=186653c7cdf449b5827e624033e9e673 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 186653c7cdf449b5827e624033e9e673 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:32.073 [ 1]:0x2 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:32.073 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:32.074 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:32.074 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.074 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.332 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:32.589 [ 0]:0x2 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:32.589 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:32.847 [2024-07-15 12:48:50.916503] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:32.847 request: 00:09:32.847 { 00:09:32.847 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:32.847 "nsid": 2, 00:09:32.847 "host": "nqn.2016-06.io.spdk:host1", 00:09:32.847 "method": "nvmf_ns_remove_host", 00:09:32.847 "req_id": 1 00:09:32.847 } 00:09:32.847 Got JSON-RPC error response 00:09:32.847 response: 00:09:32.847 { 00:09:32.847 "code": -32602, 00:09:32.847 "message": "Invalid parameters" 00:09:32.847 } 00:09:32.847 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:32.847 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.847 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.847 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.847 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:09:32.847 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:09:32.848 12:48:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:09:32.848 [ 0]:0x2 00:09:32.848 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:32.848 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e55ffc090f3f43bc8806183de8ee3ec7 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
e55ffc090f3f43bc8806183de8ee3ec7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3331911 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3331911 /var/tmp/host.sock 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3331911 ']' 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:33.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.106 12:48:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:33.106 [2024-07-15 12:48:51.249446] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
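From ns_masking.sh@117 onward the trace starts a second SPDK application (spdk_tgt -r /var/tmp/host.sock -m 2) to act as the NVMe-oF host and drives it over its own RPC socket, while the target keeps answering on the default socket. A rough sketch of that pattern, using only commands that appear in the trace; SPDK_DIR is a placeholder for the workspace path shown in the log:

SPDK_DIR=/path/to/spdk   # placeholder for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Host-side SPDK app on its own RPC socket, pinned to core 1 (-m 2).
"$SPDK_DIR/build/bin/spdk_tgt" -r /var/tmp/host.sock -m 2 &
hostpid=$!

# Target-side RPC (default socket): expose namespace 1 to host1.
"$SPDK_DIR/scripts/rpc.py" nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# Host-side RPC (-s /var/tmp/host.sock): attach a controller as that host NQN.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

kill "$hostpid"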
00:09:33.106 [2024-07-15 12:48:51.249539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331911 ] 00:09:33.106 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.106 [2024-07-15 12:48:51.310549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.366 [2024-07-15 12:48:51.420781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.302 12:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:34.302 12:48:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:09:34.302 12:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.302 12:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:34.560 12:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0fa549ff-b439-4513-958e-eab5d530290b 00:09:34.561 12:48:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:34.561 12:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FA549FFB4394513958EEAB5D530290B -i 00:09:34.818 12:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 08fefdef-4d9d-464c-a07a-cfbc1972a4a7 00:09:34.818 12:48:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:09:34.818 12:48:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 08FEFDEF4D9D464CA07ACFBC1972A4A7 -i 00:09:35.076 12:48:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:35.333 12:48:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:09:35.589 12:48:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:35.590 12:48:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:09:36.156 nvme0n1 00:09:36.156 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:09:36.156 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:09:36.414 nvme1n2 00:09:36.414 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:09:36.414 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:09:36.414 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:09:36.414 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:09:36.414 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:09:36.671 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:09:36.671 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:09:36.671 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:09:36.671 12:48:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:09:36.929 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0fa549ff-b439-4513-958e-eab5d530290b == \0\f\a\5\4\9\f\f\-\b\4\3\9\-\4\5\1\3\-\9\5\8\e\-\e\a\b\5\d\5\3\0\2\9\0\b ]] 00:09:36.929 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:09:36.929 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:09:36.929 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:09:37.188 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 08fefdef-4d9d-464c-a07a-cfbc1972a4a7 == \0\8\f\e\f\d\e\f\-\4\d\9\d\-\4\6\4\c\-\a\0\7\a\-\c\f\b\c\1\9\7\2\a\4\a\7 ]] 00:09:37.188 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3331911 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3331911 ']' 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3331911 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3331911 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3331911' 00:09:37.211 killing process with pid 3331911 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3331911 00:09:37.211 12:48:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3331911 00:09:37.778 12:48:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:09:38.038 12:48:56 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.038 rmmod nvme_tcp 00:09:38.038 rmmod nvme_fabrics 00:09:38.038 rmmod nvme_keyring 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3330286 ']' 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3330286 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3330286 ']' 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3330286 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330286 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330286' 00:09:38.038 killing process with pid 3330286 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3330286 00:09:38.038 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3330286 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:38.297 12:48:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.837 12:48:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.837 00:09:40.837 real 0m21.803s 00:09:40.837 user 0m28.744s 00:09:40.837 sys 0m4.235s 00:09:40.837 12:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:40.837 12:48:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 ************************************ 00:09:40.837 END TEST nvmf_ns_masking 00:09:40.837 ************************************ 00:09:40.837 12:48:58 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:09:40.837 12:48:58 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:40.837 12:48:58 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:40.837 12:48:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:40.837 12:48:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.837 12:48:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:40.837 ************************************ 00:09:40.837 START TEST nvmf_nvme_cli 00:09:40.837 ************************************ 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:40.837 * Looking for test storage... 00:09:40.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:40.837 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:40.838 12:48:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:42.737 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:42.737 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.737 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:42.738 Found net devices under 0000:84:00.0: cvl_0_0 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:42.738 Found net devices under 0000:84:00.1: cvl_0_1 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.738 12:49:00 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:09:42.738 00:09:42.738 --- 10.0.0.2 ping statistics --- 00:09:42.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.738 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:42.738 00:09:42.738 --- 10.0.0.1 ping statistics --- 00:09:42.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.738 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3334433 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3334433 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3334433 ']' 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:42.738 12:49:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:42.738 [2024-07-15 12:49:00.888950] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
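The setup traced above puts the target-side E810 port into its own network namespace (10.0.0.2) while the initiator port stays in the root namespace (10.0.0.1), verifies reachability both ways, and then runs nvmf_tgt inside that namespace. A condensed sketch of the same steps, taken from the commands shown in the trace; the cvl_0_0/cvl_0_1 interface names are what this log discovered and would differ on other hosts, and SPDK_DIR again stands in for the workspace path:

# Split the two ports across namespaces and address both ends.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port and check connectivity in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Target runs inside the namespace; the initiator-side nvme-cli runs in the root namespace.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF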
00:09:42.738 [2024-07-15 12:49:00.889031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.738 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.995 [2024-07-15 12:49:00.953079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.995 [2024-07-15 12:49:01.055905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.995 [2024-07-15 12:49:01.055962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.995 [2024-07-15 12:49:01.055982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.995 [2024-07-15 12:49:01.055998] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.995 [2024-07-15 12:49:01.056013] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.995 [2024-07-15 12:49:01.056097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.995 [2024-07-15 12:49:01.056172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.995 [2024-07-15 12:49:01.056240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.995 [2024-07-15 12:49:01.056234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.995 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:42.995 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:09:42.995 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:42.995 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:42.995 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 [2024-07-15 12:49:01.209638] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 Malloc0 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 Malloc1 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 [2024-07-15 12:49:01.292349] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.253 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:09:43.511 00:09:43.511 Discovery Log Number of Records 2, Generation counter 2 00:09:43.511 =====Discovery Log Entry 0====== 00:09:43.511 trtype: tcp 00:09:43.511 adrfam: ipv4 00:09:43.511 subtype: current discovery subsystem 00:09:43.511 treq: not required 00:09:43.511 portid: 0 00:09:43.511 trsvcid: 4420 00:09:43.511 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:43.511 traddr: 10.0.0.2 00:09:43.511 eflags: explicit discovery connections, duplicate discovery information 00:09:43.511 sectype: none 00:09:43.511 =====Discovery Log Entry 1====== 00:09:43.511 trtype: tcp 00:09:43.511 adrfam: ipv4 00:09:43.511 subtype: nvme subsystem 00:09:43.511 treq: not required 00:09:43.511 portid: 0 00:09:43.511 trsvcid: 4420 00:09:43.511 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:43.511 traddr: 10.0.0.2 00:09:43.511 eflags: none 00:09:43.511 sectype: none 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:43.511 12:49:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.093 12:49:02 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:44.093 12:49:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:09:44.093 12:49:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.093 12:49:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:09:44.093 12:49:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:09:44.093 12:49:02 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:09:45.992 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:45.992 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:45.992 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.250 12:49:04 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:46.250 /dev/nvme0n1 ]] 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.250 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:46.508 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.768 rmmod nvme_tcp 00:09:46.768 rmmod nvme_fabrics 00:09:46.768 rmmod nvme_keyring 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3334433 ']' 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3334433 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3334433 ']' 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3334433 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3334433 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3334433' 00:09:46.768 killing process with pid 3334433 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3334433 00:09:46.768 12:49:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3334433 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.028 12:49:05 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.595 12:49:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.595 00:09:49.595 real 0m8.638s 00:09:49.595 user 0m16.635s 00:09:49.595 sys 0m2.253s 00:09:49.595 12:49:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.595 12:49:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:49.595 ************************************ 00:09:49.595 END TEST nvmf_nvme_cli 00:09:49.595 ************************************ 00:09:49.595 12:49:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:49.595 12:49:07 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:49.595 12:49:07 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.595 12:49:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:49.595 12:49:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.596 12:49:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.596 ************************************ 00:09:49.596 START TEST nvmf_vfio_user 00:09:49.596 ************************************ 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:49.596 * Looking for test storage... 00:09:49.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:49.596 
12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3335376 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3335376' 00:09:49.596 Process pid: 3335376 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3335376 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3335376 ']' 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:49.596 [2024-07-15 12:49:07.355541] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:09:49.596 [2024-07-15 12:49:07.355645] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.596 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.596 [2024-07-15 12:49:07.416677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:49.596 [2024-07-15 12:49:07.532280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.596 [2024-07-15 12:49:07.532344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.596 [2024-07-15 12:49:07.532365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.596 [2024-07-15 12:49:07.532383] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.596 [2024-07-15 12:49:07.532398] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
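For reference, the nvmf_nvme_cli run that finished above reduces to a short RPC-plus-nvme-cli sequence. This is a minimal sketch, not the harness itself: it assumes an SPDK nvmf_tgt is already running with a TCP transport and Malloc0/Malloc1 bdevs, the rpc_cmd calls in the trace are shown as plain scripts/rpc.py invocations relative to the spdk checkout, and the --hostnqn/--hostid flags from the trace are omitted for brevity; the NQN, serial number and 10.0.0.2/4420 listener are taken directly from the log.

    # subsystem setup as driven by target/nvme_cli.sh (sketch)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # discover, connect, check that both namespaces show up, then tear down
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1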
00:09:49.596 [2024-07-15 12:49:07.532489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.596 [2024-07-15 12:49:07.532556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.596 [2024-07-15 12:49:07.532590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.596 [2024-07-15 12:49:07.532595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:09:49.596 12:49:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:50.531 12:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:50.790 12:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:50.790 12:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:50.790 12:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:50.790 12:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:50.790 12:49:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:51.357 Malloc1 00:09:51.357 12:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:51.357 12:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:51.616 12:49:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:51.873 12:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:51.873 12:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:51.873 12:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:52.130 Malloc2 00:09:52.130 12:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:52.387 12:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:52.644 12:49:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:09:52.903 12:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:52.903 12:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:52.903 12:49:11 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:52.903 12:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:52.903 12:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:52.903 12:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:52.903 [2024-07-15 12:49:11.071056] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:09:52.903 [2024-07-15 12:49:11.071103] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3335804 ] 00:09:52.903 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.903 [2024-07-15 12:49:11.102952] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:52.903 [2024-07-15 12:49:11.108396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:52.903 [2024-07-15 12:49:11.108423] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f19e729a000 00:09:52.903 [2024-07-15 12:49:11.109395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:52.903 [2024-07-15 12:49:11.110405] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.111411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.112415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.113420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.114425] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.115426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.116431] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:53.163 [2024-07-15 12:49:11.117438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:53.163 [2024-07-15 12:49:11.117458] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f19e728f000 00:09:53.163 [2024-07-15 12:49:11.118572] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:53.163 [2024-07-15 12:49:11.134316] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:53.163 [2024-07-15 12:49:11.134355] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:53.163 [2024-07-15 12:49:11.136552] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:53.163 [2024-07-15 12:49:11.136605] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:53.163 [2024-07-15 12:49:11.136694] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:53.163 [2024-07-15 12:49:11.136745] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:53.163 [2024-07-15 12:49:11.136759] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:53.163 [2024-07-15 12:49:11.137538] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:53.163 [2024-07-15 12:49:11.137557] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:53.163 [2024-07-15 12:49:11.137569] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:53.163 [2024-07-15 12:49:11.138541] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:53.163 [2024-07-15 12:49:11.138561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:53.163 [2024-07-15 12:49:11.138574] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:53.163 [2024-07-15 12:49:11.139543] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:53.163 [2024-07-15 12:49:11.139562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:53.163 [2024-07-15 12:49:11.140553] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:53.163 [2024-07-15 12:49:11.140572] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:53.163 [2024-07-15 12:49:11.140581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:53.163 [2024-07-15 12:49:11.140592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:53.163 [2024-07-15 12:49:11.140701] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:53.163 [2024-07-15 12:49:11.140708] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:53.163 [2024-07-15 12:49:11.140731] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:53.163 [2024-07-15 12:49:11.144748] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:53.163 [2024-07-15 12:49:11.145577] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:53.163 [2024-07-15 12:49:11.146584] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:53.163 [2024-07-15 12:49:11.147580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.163 [2024-07-15 12:49:11.147683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:53.163 [2024-07-15 12:49:11.148595] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:53.163 [2024-07-15 12:49:11.148617] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:53.163 [2024-07-15 12:49:11.148627] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.148651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:53.163 [2024-07-15 12:49:11.148670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.148696] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:53.163 [2024-07-15 12:49:11.148705] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.163 [2024-07-15 12:49:11.148749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.148810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.148828] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:53.163 [2024-07-15 12:49:11.148840] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:53.163 [2024-07-15 12:49:11.148849] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:53.163 [2024-07-15 12:49:11.148856] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:53.163 [2024-07-15 12:49:11.148864] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:53.163 [2024-07-15 12:49:11.148872] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:53.163 [2024-07-15 12:49:11.148880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.148894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.148910] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.148925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.148947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.163 [2024-07-15 12:49:11.148960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.163 [2024-07-15 12:49:11.148972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.163 [2024-07-15 12:49:11.148984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:53.163 [2024-07-15 12:49:11.148993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149008] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149069] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:53.163 [2024-07-15 12:49:11.149077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149127] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149199] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149228] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:53.163 [2024-07-15 12:49:11.149236] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:53.163 [2024-07-15 12:49:11.149245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149279] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:53.163 [2024-07-15 12:49:11.149295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149321] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:53.163 [2024-07-15 12:49:11.149329] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.163 [2024-07-15 12:49:11.149338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149413] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:53.163 [2024-07-15 12:49:11.149421] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.163 [2024-07-15 12:49:11.149430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:09:53.163 [2024-07-15 12:49:11.149484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149518] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:53.163 [2024-07-15 12:49:11.149526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:53.163 [2024-07-15 12:49:11.149534] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:53.163 [2024-07-15 12:49:11.149559] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149595] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149686] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:53.163 [2024-07-15 12:49:11.149696] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:53.163 [2024-07-15 12:49:11.149702] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:53.163 [2024-07-15 12:49:11.149707] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:53.163 [2024-07-15 12:49:11.149716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:53.163 [2024-07-15 12:49:11.149757] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:53.163 
[2024-07-15 12:49:11.149767] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:53.163 [2024-07-15 12:49:11.149777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149792] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:53.163 [2024-07-15 12:49:11.149801] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:53.163 [2024-07-15 12:49:11.149810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149822] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:53.163 [2024-07-15 12:49:11.149830] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:53.163 [2024-07-15 12:49:11.149839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:53.163 [2024-07-15 12:49:11.149851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:53.163 [2024-07-15 12:49:11.149901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:53.163 ===================================================== 00:09:53.163 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:53.163 ===================================================== 00:09:53.163 Controller Capabilities/Features 00:09:53.163 ================================ 00:09:53.163 Vendor ID: 4e58 00:09:53.163 Subsystem Vendor ID: 4e58 00:09:53.163 Serial Number: SPDK1 00:09:53.163 Model Number: SPDK bdev Controller 00:09:53.163 Firmware Version: 24.09 00:09:53.163 Recommended Arb Burst: 6 00:09:53.163 IEEE OUI Identifier: 8d 6b 50 00:09:53.163 Multi-path I/O 00:09:53.163 May have multiple subsystem ports: Yes 00:09:53.163 May have multiple controllers: Yes 00:09:53.163 Associated with SR-IOV VF: No 00:09:53.163 Max Data Transfer Size: 131072 00:09:53.163 Max Number of Namespaces: 32 00:09:53.163 Max Number of I/O Queues: 127 00:09:53.163 NVMe Specification Version (VS): 1.3 00:09:53.163 NVMe Specification Version (Identify): 1.3 00:09:53.163 Maximum Queue Entries: 256 00:09:53.163 Contiguous Queues Required: Yes 00:09:53.163 Arbitration Mechanisms Supported 00:09:53.163 Weighted Round Robin: Not Supported 00:09:53.163 Vendor Specific: Not Supported 00:09:53.163 Reset Timeout: 15000 ms 00:09:53.163 Doorbell Stride: 4 bytes 00:09:53.163 NVM Subsystem Reset: Not Supported 00:09:53.163 Command Sets Supported 00:09:53.163 NVM Command Set: Supported 00:09:53.163 Boot Partition: Not Supported 00:09:53.163 Memory Page Size Minimum: 4096 bytes 00:09:53.163 Memory Page Size Maximum: 4096 bytes 00:09:53.163 Persistent Memory Region: Not Supported 
00:09:53.163 Optional Asynchronous Events Supported 00:09:53.163 Namespace Attribute Notices: Supported 00:09:53.163 Firmware Activation Notices: Not Supported 00:09:53.163 ANA Change Notices: Not Supported 00:09:53.163 PLE Aggregate Log Change Notices: Not Supported 00:09:53.163 LBA Status Info Alert Notices: Not Supported 00:09:53.163 EGE Aggregate Log Change Notices: Not Supported 00:09:53.163 Normal NVM Subsystem Shutdown event: Not Supported 00:09:53.163 Zone Descriptor Change Notices: Not Supported 00:09:53.163 Discovery Log Change Notices: Not Supported 00:09:53.163 Controller Attributes 00:09:53.163 128-bit Host Identifier: Supported 00:09:53.163 Non-Operational Permissive Mode: Not Supported 00:09:53.163 NVM Sets: Not Supported 00:09:53.163 Read Recovery Levels: Not Supported 00:09:53.163 Endurance Groups: Not Supported 00:09:53.163 Predictable Latency Mode: Not Supported 00:09:53.163 Traffic Based Keep ALive: Not Supported 00:09:53.163 Namespace Granularity: Not Supported 00:09:53.163 SQ Associations: Not Supported 00:09:53.163 UUID List: Not Supported 00:09:53.163 Multi-Domain Subsystem: Not Supported 00:09:53.163 Fixed Capacity Management: Not Supported 00:09:53.163 Variable Capacity Management: Not Supported 00:09:53.163 Delete Endurance Group: Not Supported 00:09:53.163 Delete NVM Set: Not Supported 00:09:53.163 Extended LBA Formats Supported: Not Supported 00:09:53.163 Flexible Data Placement Supported: Not Supported 00:09:53.163 00:09:53.163 Controller Memory Buffer Support 00:09:53.163 ================================ 00:09:53.163 Supported: No 00:09:53.163 00:09:53.163 Persistent Memory Region Support 00:09:53.163 ================================ 00:09:53.163 Supported: No 00:09:53.163 00:09:53.163 Admin Command Set Attributes 00:09:53.163 ============================ 00:09:53.163 Security Send/Receive: Not Supported 00:09:53.163 Format NVM: Not Supported 00:09:53.163 Firmware Activate/Download: Not Supported 00:09:53.163 Namespace Management: Not Supported 00:09:53.163 Device Self-Test: Not Supported 00:09:53.163 Directives: Not Supported 00:09:53.163 NVMe-MI: Not Supported 00:09:53.163 Virtualization Management: Not Supported 00:09:53.163 Doorbell Buffer Config: Not Supported 00:09:53.163 Get LBA Status Capability: Not Supported 00:09:53.163 Command & Feature Lockdown Capability: Not Supported 00:09:53.163 Abort Command Limit: 4 00:09:53.163 Async Event Request Limit: 4 00:09:53.163 Number of Firmware Slots: N/A 00:09:53.163 Firmware Slot 1 Read-Only: N/A 00:09:53.163 Firmware Activation Without Reset: N/A 00:09:53.163 Multiple Update Detection Support: N/A 00:09:53.163 Firmware Update Granularity: No Information Provided 00:09:53.163 Per-Namespace SMART Log: No 00:09:53.163 Asymmetric Namespace Access Log Page: Not Supported 00:09:53.163 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:53.163 Command Effects Log Page: Supported 00:09:53.163 Get Log Page Extended Data: Supported 00:09:53.163 Telemetry Log Pages: Not Supported 00:09:53.163 Persistent Event Log Pages: Not Supported 00:09:53.163 Supported Log Pages Log Page: May Support 00:09:53.163 Commands Supported & Effects Log Page: Not Supported 00:09:53.163 Feature Identifiers & Effects Log Page:May Support 00:09:53.163 NVMe-MI Commands & Effects Log Page: May Support 00:09:53.163 Data Area 4 for Telemetry Log: Not Supported 00:09:53.163 Error Log Page Entries Supported: 128 00:09:53.163 Keep Alive: Supported 00:09:53.163 Keep Alive Granularity: 10000 ms 00:09:53.163 00:09:53.163 NVM Command Set Attributes 
00:09:53.163 ========================== 00:09:53.163 Submission Queue Entry Size 00:09:53.163 Max: 64 00:09:53.163 Min: 64 00:09:53.163 Completion Queue Entry Size 00:09:53.163 Max: 16 00:09:53.163 Min: 16 00:09:53.163 Number of Namespaces: 32 00:09:53.163 Compare Command: Supported 00:09:53.163 Write Uncorrectable Command: Not Supported 00:09:53.163 Dataset Management Command: Supported 00:09:53.163 Write Zeroes Command: Supported 00:09:53.163 Set Features Save Field: Not Supported 00:09:53.163 Reservations: Not Supported 00:09:53.163 Timestamp: Not Supported 00:09:53.163 Copy: Supported 00:09:53.163 Volatile Write Cache: Present 00:09:53.163 Atomic Write Unit (Normal): 1 00:09:53.163 Atomic Write Unit (PFail): 1 00:09:53.163 Atomic Compare & Write Unit: 1 00:09:53.163 Fused Compare & Write: Supported 00:09:53.163 Scatter-Gather List 00:09:53.163 SGL Command Set: Supported (Dword aligned) 00:09:53.163 SGL Keyed: Not Supported 00:09:53.163 SGL Bit Bucket Descriptor: Not Supported 00:09:53.163 SGL Metadata Pointer: Not Supported 00:09:53.163 Oversized SGL: Not Supported 00:09:53.163 SGL Metadata Address: Not Supported 00:09:53.163 SGL Offset: Not Supported 00:09:53.163 Transport SGL Data Block: Not Supported 00:09:53.163 Replay Protected Memory Block: Not Supported 00:09:53.163 00:09:53.163 Firmware Slot Information 00:09:53.163 ========================= 00:09:53.163 Active slot: 1 00:09:53.163 Slot 1 Firmware Revision: 24.09 00:09:53.163 00:09:53.164 00:09:53.164 Commands Supported and Effects 00:09:53.164 ============================== 00:09:53.164 Admin Commands 00:09:53.164 -------------- 00:09:53.164 Get Log Page (02h): Supported 00:09:53.164 Identify (06h): Supported 00:09:53.164 Abort (08h): Supported 00:09:53.164 Set Features (09h): Supported 00:09:53.164 Get Features (0Ah): Supported 00:09:53.164 Asynchronous Event Request (0Ch): Supported 00:09:53.164 Keep Alive (18h): Supported 00:09:53.164 I/O Commands 00:09:53.164 ------------ 00:09:53.164 Flush (00h): Supported LBA-Change 00:09:53.164 Write (01h): Supported LBA-Change 00:09:53.164 Read (02h): Supported 00:09:53.164 Compare (05h): Supported 00:09:53.164 Write Zeroes (08h): Supported LBA-Change 00:09:53.164 Dataset Management (09h): Supported LBA-Change 00:09:53.164 Copy (19h): Supported LBA-Change 00:09:53.164 00:09:53.164 Error Log 00:09:53.164 ========= 00:09:53.164 00:09:53.164 Arbitration 00:09:53.164 =========== 00:09:53.164 Arbitration Burst: 1 00:09:53.164 00:09:53.164 Power Management 00:09:53.164 ================ 00:09:53.164 Number of Power States: 1 00:09:53.164 Current Power State: Power State #0 00:09:53.164 Power State #0: 00:09:53.164 Max Power: 0.00 W 00:09:53.164 Non-Operational State: Operational 00:09:53.164 Entry Latency: Not Reported 00:09:53.164 Exit Latency: Not Reported 00:09:53.164 Relative Read Throughput: 0 00:09:53.164 Relative Read Latency: 0 00:09:53.164 Relative Write Throughput: 0 00:09:53.164 Relative Write Latency: 0 00:09:53.164 Idle Power: Not Reported 00:09:53.164 Active Power: Not Reported 00:09:53.164 Non-Operational Permissive Mode: Not Supported 00:09:53.164 00:09:53.164 Health Information 00:09:53.164 ================== 00:09:53.164 Critical Warnings: 00:09:53.164 Available Spare Space: OK 00:09:53.164 Temperature: OK 00:09:53.164 Device Reliability: OK 00:09:53.164 Read Only: No 00:09:53.164 Volatile Memory Backup: OK 00:09:53.164 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:53.164 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:53.164 Available Spare: 0% 00:09:53.164 
Available Sp[2024-07-15 12:49:11.150022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:53.164 [2024-07-15 12:49:11.150053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:53.164 [2024-07-15 12:49:11.150099] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:53.164 [2024-07-15 12:49:11.150116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.164 [2024-07-15 12:49:11.150127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.164 [2024-07-15 12:49:11.150137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.164 [2024-07-15 12:49:11.150147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:53.164 [2024-07-15 12:49:11.150610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:53.164 [2024-07-15 12:49:11.150631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:53.164 [2024-07-15 12:49:11.151613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.164 [2024-07-15 12:49:11.151696] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:53.164 [2024-07-15 12:49:11.151716] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:53.164 [2024-07-15 12:49:11.152622] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:53.164 [2024-07-15 12:49:11.152645] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:53.164 [2024-07-15 12:49:11.152699] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:53.164 [2024-07-15 12:49:11.157749] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:53.164 are Threshold: 0% 00:09:53.164 Life Percentage Used: 0% 00:09:53.164 Data Units Read: 0 00:09:53.164 Data Units Written: 0 00:09:53.164 Host Read Commands: 0 00:09:53.164 Host Write Commands: 0 00:09:53.164 Controller Busy Time: 0 minutes 00:09:53.164 Power Cycles: 0 00:09:53.164 Power On Hours: 0 hours 00:09:53.164 Unsafe Shutdowns: 0 00:09:53.164 Unrecoverable Media Errors: 0 00:09:53.164 Lifetime Error Log Entries: 0 00:09:53.164 Warning Temperature Time: 0 minutes 00:09:53.164 Critical Temperature Time: 0 minutes 00:09:53.164 00:09:53.164 Number of Queues 00:09:53.164 ================ 00:09:53.164 Number of I/O Submission Queues: 127 00:09:53.164 Number of I/O Completion Queues: 127 00:09:53.164 00:09:53.164 Active Namespaces 00:09:53.164 ================= 00:09:53.164 Namespace ID:1 00:09:53.164 Error Recovery Timeout: Unlimited 00:09:53.164 Command 
Set Identifier: NVM (00h) 00:09:53.164 Deallocate: Supported 00:09:53.164 Deallocated/Unwritten Error: Not Supported 00:09:53.164 Deallocated Read Value: Unknown 00:09:53.164 Deallocate in Write Zeroes: Not Supported 00:09:53.164 Deallocated Guard Field: 0xFFFF 00:09:53.164 Flush: Supported 00:09:53.164 Reservation: Supported 00:09:53.164 Namespace Sharing Capabilities: Multiple Controllers 00:09:53.164 Size (in LBAs): 131072 (0GiB) 00:09:53.164 Capacity (in LBAs): 131072 (0GiB) 00:09:53.164 Utilization (in LBAs): 131072 (0GiB) 00:09:53.164 NGUID: 0B7A24976F914CC58EDD25EE916B8B8D 00:09:53.164 UUID: 0b7a2497-6f91-4cc5-8edd-25ee916b8b8d 00:09:53.164 Thin Provisioning: Not Supported 00:09:53.164 Per-NS Atomic Units: Yes 00:09:53.164 Atomic Boundary Size (Normal): 0 00:09:53.164 Atomic Boundary Size (PFail): 0 00:09:53.164 Atomic Boundary Offset: 0 00:09:53.164 Maximum Single Source Range Length: 65535 00:09:53.164 Maximum Copy Length: 65535 00:09:53.164 Maximum Source Range Count: 1 00:09:53.164 NGUID/EUI64 Never Reused: No 00:09:53.164 Namespace Write Protected: No 00:09:53.164 Number of LBA Formats: 1 00:09:53.164 Current LBA Format: LBA Format #00 00:09:53.164 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:53.164 00:09:53.164 12:49:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:53.164 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.423 [2024-07-15 12:49:11.387564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.709 Initializing NVMe Controllers 00:09:58.709 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:58.709 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:58.709 Initialization complete. Launching workers. 00:09:58.709 ======================================================== 00:09:58.709 Latency(us) 00:09:58.710 Device Information : IOPS MiB/s Average min max 00:09:58.710 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34920.18 136.41 3665.73 1162.53 7365.01 00:09:58.710 ======================================================== 00:09:58.710 Total : 34920.18 136.41 3665.73 1162.53 7365.01 00:09:58.710 00:09:58.710 [2024-07-15 12:49:16.410584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:58.710 12:49:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:58.710 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.710 [2024-07-15 12:49:16.645783] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:03.984 Initializing NVMe Controllers 00:10:03.984 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:03.984 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:10:03.984 Initialization complete. Launching workers. 
00:10:03.984 ======================================================== 00:10:03.984 Latency(us) 00:10:03.984 Device Information : IOPS MiB/s Average min max 00:10:03.984 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15987.60 62.45 8016.19 5987.59 15793.80 00:10:03.984 ======================================================== 00:10:03.984 Total : 15987.60 62.45 8016.19 5987.59 15793.80 00:10:03.984 00:10:03.984 [2024-07-15 12:49:21.685019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:03.984 12:49:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:03.984 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.985 [2024-07-15 12:49:21.887035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:09.291 [2024-07-15 12:49:26.953092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:09.291 Initializing NVMe Controllers 00:10:09.291 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.291 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:10:09.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:10:09.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:10:09.291 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:10:09.291 Initialization complete. Launching workers. 00:10:09.291 Starting thread on core 2 00:10:09.291 Starting thread on core 3 00:10:09.291 Starting thread on core 1 00:10:09.291 12:49:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:10:09.291 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.291 [2024-07-15 12:49:27.257268] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.582 [2024-07-15 12:49:30.321682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.582 Initializing NVMe Controllers 00:10:12.582 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.582 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.582 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:10:12.582 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:10:12.582 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:10:12.582 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:10:12.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:12.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:12.582 Initialization complete. Launching workers. 
00:10:12.582 Starting thread on core 1 with urgent priority queue 00:10:12.582 Starting thread on core 2 with urgent priority queue 00:10:12.582 Starting thread on core 3 with urgent priority queue 00:10:12.582 Starting thread on core 0 with urgent priority queue 00:10:12.582 SPDK bdev Controller (SPDK1 ) core 0: 4838.67 IO/s 20.67 secs/100000 ios 00:10:12.582 SPDK bdev Controller (SPDK1 ) core 1: 5166.00 IO/s 19.36 secs/100000 ios 00:10:12.582 SPDK bdev Controller (SPDK1 ) core 2: 5238.67 IO/s 19.09 secs/100000 ios 00:10:12.582 SPDK bdev Controller (SPDK1 ) core 3: 5129.00 IO/s 19.50 secs/100000 ios 00:10:12.582 ======================================================== 00:10:12.582 00:10:12.582 12:49:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.582 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.582 [2024-07-15 12:49:30.623005] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:12.582 Initializing NVMe Controllers 00:10:12.582 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.582 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:12.582 Namespace ID: 1 size: 0GB 00:10:12.582 Initialization complete. 00:10:12.582 INFO: using host memory buffer for IO 00:10:12.582 Hello world! 00:10:12.582 [2024-07-15 12:49:30.657598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:12.582 12:49:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:10:12.582 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.840 [2024-07-15 12:49:30.961839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:13.773 Initializing NVMe Controllers 00:10:13.773 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:13.773 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:13.773 Initialization complete. Launching workers. 
00:10:13.773 submit (in ns) avg, min, max = 7264.9, 3508.9, 4024594.4 00:10:13.773 complete (in ns) avg, min, max = 26814.1, 2061.1, 4018162.2 00:10:13.773 00:10:13.773 Submit histogram 00:10:13.773 ================ 00:10:13.773 Range in us Cumulative Count 00:10:13.773 3.508 - 3.532: 0.0297% ( 4) 00:10:13.773 3.532 - 3.556: 0.2450% ( 29) 00:10:13.773 3.556 - 3.579: 1.0244% ( 105) 00:10:13.773 3.579 - 3.603: 3.2440% ( 299) 00:10:13.773 3.603 - 3.627: 7.0076% ( 507) 00:10:13.774 3.627 - 3.650: 14.1563% ( 963) 00:10:13.774 3.650 - 3.674: 22.7155% ( 1153) 00:10:13.774 3.674 - 3.698: 32.5143% ( 1320) 00:10:13.774 3.698 - 3.721: 40.9843% ( 1141) 00:10:13.774 3.721 - 3.745: 48.0217% ( 948) 00:10:13.774 3.745 - 3.769: 53.4185% ( 727) 00:10:13.774 3.769 - 3.793: 57.7017% ( 577) 00:10:13.774 3.793 - 3.816: 61.4431% ( 504) 00:10:13.774 3.816 - 3.840: 64.4124% ( 400) 00:10:13.774 3.840 - 3.864: 67.8346% ( 461) 00:10:13.774 3.864 - 3.887: 71.4349% ( 485) 00:10:13.774 3.887 - 3.911: 75.5104% ( 549) 00:10:13.774 3.911 - 3.935: 79.5858% ( 549) 00:10:13.774 3.935 - 3.959: 82.9189% ( 449) 00:10:13.774 3.959 - 3.982: 85.4651% ( 343) 00:10:13.774 3.982 - 4.006: 87.6995% ( 301) 00:10:13.774 4.006 - 4.030: 89.3326% ( 220) 00:10:13.774 4.030 - 4.053: 90.6466% ( 177) 00:10:13.774 4.053 - 4.077: 91.5968% ( 128) 00:10:13.774 4.077 - 4.101: 92.3688% ( 104) 00:10:13.774 4.101 - 4.124: 93.2893% ( 124) 00:10:13.774 4.124 - 4.148: 94.0613% ( 104) 00:10:13.774 4.148 - 4.172: 94.6478% ( 79) 00:10:13.774 4.172 - 4.196: 95.0486% ( 54) 00:10:13.774 4.196 - 4.219: 95.5831% ( 72) 00:10:13.774 4.219 - 4.243: 95.9617% ( 51) 00:10:13.774 4.243 - 4.267: 96.1695% ( 28) 00:10:13.774 4.267 - 4.290: 96.3551% ( 25) 00:10:13.774 4.290 - 4.314: 96.4962% ( 19) 00:10:13.774 4.314 - 4.338: 96.6150% ( 16) 00:10:13.774 4.338 - 4.361: 96.7486% ( 18) 00:10:13.774 4.361 - 4.385: 96.8005% ( 7) 00:10:13.774 4.385 - 4.409: 96.9193% ( 16) 00:10:13.774 4.409 - 4.433: 96.9861% ( 9) 00:10:13.774 4.433 - 4.456: 97.0752% ( 12) 00:10:13.774 4.456 - 4.480: 97.1123% ( 5) 00:10:13.774 4.480 - 4.504: 97.1420% ( 4) 00:10:13.774 4.504 - 4.527: 97.1717% ( 4) 00:10:13.774 4.527 - 4.551: 97.2162% ( 6) 00:10:13.774 4.575 - 4.599: 97.2608% ( 6) 00:10:13.774 4.599 - 4.622: 97.2756% ( 2) 00:10:13.774 4.622 - 4.646: 97.2831% ( 1) 00:10:13.774 4.646 - 4.670: 97.2905% ( 1) 00:10:13.774 4.670 - 4.693: 97.2979% ( 1) 00:10:13.774 4.693 - 4.717: 97.3053% ( 1) 00:10:13.774 4.741 - 4.764: 97.3127% ( 1) 00:10:13.774 4.764 - 4.788: 97.3202% ( 1) 00:10:13.774 4.836 - 4.859: 97.3276% ( 1) 00:10:13.774 4.859 - 4.883: 97.3424% ( 2) 00:10:13.774 4.883 - 4.907: 97.3573% ( 2) 00:10:13.774 4.907 - 4.930: 97.3944% ( 5) 00:10:13.774 4.930 - 4.954: 97.4092% ( 2) 00:10:13.774 4.954 - 4.978: 97.4464% ( 5) 00:10:13.774 4.978 - 5.001: 97.4686% ( 3) 00:10:13.774 5.001 - 5.025: 97.5058% ( 5) 00:10:13.774 5.025 - 5.049: 97.5503% ( 6) 00:10:13.774 5.049 - 5.073: 97.6097% ( 8) 00:10:13.774 5.073 - 5.096: 97.6542% ( 6) 00:10:13.774 5.096 - 5.120: 97.6988% ( 6) 00:10:13.774 5.120 - 5.144: 97.7507% ( 7) 00:10:13.774 5.144 - 5.167: 97.7656% ( 2) 00:10:13.774 5.167 - 5.191: 97.8027% ( 5) 00:10:13.774 5.191 - 5.215: 97.8547% ( 7) 00:10:13.774 5.215 - 5.239: 97.8695% ( 2) 00:10:13.774 5.239 - 5.262: 97.8843% ( 2) 00:10:13.774 5.262 - 5.286: 97.9066% ( 3) 00:10:13.774 5.286 - 5.310: 97.9512% ( 6) 00:10:13.774 5.310 - 5.333: 97.9660% ( 2) 00:10:13.774 5.333 - 5.357: 98.0105% ( 6) 00:10:13.774 5.357 - 5.381: 98.0180% ( 1) 00:10:13.774 5.381 - 5.404: 98.0254% ( 1) 00:10:13.774 5.404 - 5.428: 98.0402% ( 2) 
00:10:13.774 5.428 - 5.452: 98.0848% ( 6) 00:10:13.774 5.476 - 5.499: 98.0922% ( 1) 00:10:13.774 5.499 - 5.523: 98.1070% ( 2) 00:10:13.774 5.523 - 5.547: 98.1145% ( 1) 00:10:13.774 5.547 - 5.570: 98.1219% ( 1) 00:10:13.774 5.618 - 5.641: 98.1293% ( 1) 00:10:13.774 5.665 - 5.689: 98.1516% ( 3) 00:10:13.774 5.689 - 5.713: 98.1590% ( 1) 00:10:13.774 5.736 - 5.760: 98.1664% ( 1) 00:10:13.774 5.760 - 5.784: 98.1739% ( 1) 00:10:13.774 5.997 - 6.021: 98.1813% ( 1) 00:10:13.774 6.163 - 6.210: 98.1887% ( 1) 00:10:13.774 6.305 - 6.353: 98.1961% ( 1) 00:10:13.774 6.637 - 6.684: 98.2035% ( 1) 00:10:13.774 6.684 - 6.732: 98.2184% ( 2) 00:10:13.774 6.827 - 6.874: 98.2332% ( 2) 00:10:13.774 6.874 - 6.921: 98.2407% ( 1) 00:10:13.774 6.921 - 6.969: 98.2481% ( 1) 00:10:13.774 7.111 - 7.159: 98.2555% ( 1) 00:10:13.774 7.253 - 7.301: 98.2629% ( 1) 00:10:13.774 7.301 - 7.348: 98.2778% ( 2) 00:10:13.774 7.348 - 7.396: 98.2852% ( 1) 00:10:13.774 7.396 - 7.443: 98.3149% ( 4) 00:10:13.774 7.538 - 7.585: 98.3297% ( 2) 00:10:13.774 7.633 - 7.680: 98.3446% ( 2) 00:10:13.774 7.822 - 7.870: 98.3594% ( 2) 00:10:13.774 7.870 - 7.917: 98.3743% ( 2) 00:10:13.774 7.917 - 7.964: 98.3817% ( 1) 00:10:13.774 7.964 - 8.012: 98.4040% ( 3) 00:10:13.774 8.012 - 8.059: 98.4188% ( 2) 00:10:13.774 8.107 - 8.154: 98.4262% ( 1) 00:10:13.774 8.201 - 8.249: 98.4337% ( 1) 00:10:13.774 8.249 - 8.296: 98.4485% ( 2) 00:10:13.774 8.391 - 8.439: 98.4559% ( 1) 00:10:13.774 8.439 - 8.486: 98.4708% ( 2) 00:10:13.774 8.486 - 8.533: 98.4782% ( 1) 00:10:13.774 8.533 - 8.581: 98.4856% ( 1) 00:10:13.774 8.628 - 8.676: 98.4931% ( 1) 00:10:13.774 8.676 - 8.723: 98.5228% ( 4) 00:10:13.774 8.723 - 8.770: 98.5302% ( 1) 00:10:13.774 8.865 - 8.913: 98.5450% ( 2) 00:10:13.774 8.913 - 8.960: 98.5599% ( 2) 00:10:13.774 9.007 - 9.055: 98.5821% ( 3) 00:10:13.774 9.055 - 9.102: 98.5896% ( 1) 00:10:13.774 9.339 - 9.387: 98.5970% ( 1) 00:10:13.774 9.387 - 9.434: 98.6044% ( 1) 00:10:13.774 9.529 - 9.576: 98.6193% ( 2) 00:10:13.774 9.576 - 9.624: 98.6341% ( 2) 00:10:13.774 9.624 - 9.671: 98.6564% ( 3) 00:10:13.774 9.671 - 9.719: 98.6638% ( 1) 00:10:13.774 9.766 - 9.813: 98.6712% ( 1) 00:10:13.774 9.813 - 9.861: 98.6786% ( 1) 00:10:13.774 9.956 - 10.003: 98.6861% ( 1) 00:10:13.774 10.050 - 10.098: 98.6935% ( 1) 00:10:13.774 10.098 - 10.145: 98.7009% ( 1) 00:10:13.774 10.287 - 10.335: 98.7083% ( 1) 00:10:13.774 10.382 - 10.430: 98.7232% ( 2) 00:10:13.774 10.524 - 10.572: 98.7306% ( 1) 00:10:13.774 10.619 - 10.667: 98.7380% ( 1) 00:10:13.774 10.809 - 10.856: 98.7455% ( 1) 00:10:13.774 10.856 - 10.904: 98.7529% ( 1) 00:10:13.774 10.904 - 10.951: 98.7603% ( 1) 00:10:13.774 10.951 - 10.999: 98.7677% ( 1) 00:10:13.774 10.999 - 11.046: 98.7826% ( 2) 00:10:13.774 11.093 - 11.141: 98.7900% ( 1) 00:10:13.774 11.188 - 11.236: 98.8048% ( 2) 00:10:13.774 11.283 - 11.330: 98.8123% ( 1) 00:10:13.774 11.330 - 11.378: 98.8197% ( 1) 00:10:13.774 11.473 - 11.520: 98.8271% ( 1) 00:10:13.774 11.947 - 11.994: 98.8345% ( 1) 00:10:13.774 12.041 - 12.089: 98.8420% ( 1) 00:10:13.774 12.136 - 12.231: 98.8568% ( 2) 00:10:13.774 12.231 - 12.326: 98.8642% ( 1) 00:10:13.774 12.326 - 12.421: 98.8717% ( 1) 00:10:13.774 12.421 - 12.516: 98.8791% ( 1) 00:10:13.774 12.895 - 12.990: 98.8865% ( 1) 00:10:13.774 12.990 - 13.084: 98.9013% ( 2) 00:10:13.774 13.179 - 13.274: 98.9088% ( 1) 00:10:13.774 13.274 - 13.369: 98.9162% ( 1) 00:10:13.774 13.369 - 13.464: 98.9310% ( 2) 00:10:13.774 13.464 - 13.559: 98.9385% ( 1) 00:10:13.774 13.748 - 13.843: 98.9459% ( 1) 00:10:13.774 14.127 - 14.222: 98.9533% ( 1) 
00:10:13.774 14.412 - 14.507: 98.9607% ( 1) 00:10:13.774 14.886 - 14.981: 98.9682% ( 1) 00:10:13.774 15.076 - 15.170: 98.9756% ( 1) 00:10:13.774 15.360 - 15.455: 98.9830% ( 1) 00:10:13.774 16.972 - 17.067: 98.9904% ( 1) 00:10:13.774 17.161 - 17.256: 99.0127% ( 3) 00:10:14.032 17.256 - 17.351: 99.0275% ( 2) 00:10:14.032 17.351 - 17.446: 99.0424% ( 2) 00:10:14.032 17.446 - 17.541: 99.0869% ( 6) 00:10:14.032 17.541 - 17.636: 99.1537% ( 9) 00:10:14.032 17.636 - 17.730: 99.1834% ( 4) 00:10:14.032 17.730 - 17.825: 99.2354% ( 7) 00:10:14.032 17.825 - 17.920: 99.2874% ( 7) 00:10:14.032 17.920 - 18.015: 99.3096% ( 3) 00:10:14.032 18.015 - 18.110: 99.3467% ( 5) 00:10:14.032 18.110 - 18.204: 99.3987% ( 7) 00:10:14.032 18.204 - 18.299: 99.4655% ( 9) 00:10:14.032 18.299 - 18.394: 99.5026% ( 5) 00:10:14.032 18.394 - 18.489: 99.5620% ( 8) 00:10:14.032 18.489 - 18.584: 99.6214% ( 8) 00:10:14.032 18.584 - 18.679: 99.6734% ( 7) 00:10:14.032 18.679 - 18.773: 99.7105% ( 5) 00:10:14.032 18.773 - 18.868: 99.7253% ( 2) 00:10:14.032 18.868 - 18.963: 99.7402% ( 2) 00:10:14.032 18.963 - 19.058: 99.7699% ( 4) 00:10:14.032 19.058 - 19.153: 99.7773% ( 1) 00:10:14.032 19.153 - 19.247: 99.7847% ( 1) 00:10:14.032 19.247 - 19.342: 99.7921% ( 1) 00:10:14.032 19.342 - 19.437: 99.7996% ( 1) 00:10:14.032 19.437 - 19.532: 99.8070% ( 1) 00:10:14.032 19.532 - 19.627: 99.8144% ( 1) 00:10:14.032 19.627 - 19.721: 99.8218% ( 1) 00:10:14.032 19.911 - 20.006: 99.8293% ( 1) 00:10:14.032 20.006 - 20.101: 99.8367% ( 1) 00:10:14.032 20.101 - 20.196: 99.8515% ( 2) 00:10:14.032 21.618 - 21.713: 99.8590% ( 1) 00:10:14.032 22.092 - 22.187: 99.8664% ( 1) 00:10:14.032 23.324 - 23.419: 99.8738% ( 1) 00:10:14.032 23.514 - 23.609: 99.8812% ( 1) 00:10:14.032 24.178 - 24.273: 99.8886% ( 1) 00:10:14.032 24.462 - 24.652: 99.8961% ( 1) 00:10:14.032 24.841 - 25.031: 99.9035% ( 1) 00:10:14.032 25.031 - 25.221: 99.9109% ( 1) 00:10:14.032 29.203 - 29.393: 99.9183% ( 1) 00:10:14.032 3980.705 - 4004.978: 99.9629% ( 6) 00:10:14.032 4004.978 - 4029.250: 100.0000% ( 5) 00:10:14.032 00:10:14.032 Complete histogram 00:10:14.032 ================== 00:10:14.032 Range in us Cumulative Count 00:10:14.032 2.050 - 2.062: 0.0223% ( 3) 00:10:14.032 2.062 - 2.074: 16.6580% ( 2241) 00:10:14.032 2.074 - 2.086: 40.4127% ( 3200) 00:10:14.032 2.086 - 2.098: 42.2686% ( 250) 00:10:14.032 2.098 - 2.110: 55.3188% ( 1758) 00:10:14.032 2.110 - 2.121: 60.7230% ( 728) 00:10:14.032 2.121 - 2.133: 62.3487% ( 219) 00:10:14.032 2.133 - 2.145: 70.9673% ( 1161) 00:10:14.032 2.145 - 2.157: 75.4435% ( 603) 00:10:14.032 2.157 - 2.169: 76.6684% ( 165) 00:10:14.032 2.169 - 2.181: 80.6473% ( 536) 00:10:14.032 2.181 - 2.193: 82.4735% ( 246) 00:10:14.032 2.193 - 2.204: 83.2306% ( 102) 00:10:14.032 2.204 - 2.216: 86.4524% ( 434) 00:10:14.032 2.216 - 2.228: 89.1025% ( 357) 00:10:14.032 2.228 - 2.240: 90.8619% ( 237) 00:10:14.032 2.240 - 2.252: 92.8736% ( 271) 00:10:14.032 2.252 - 2.264: 93.7644% ( 120) 00:10:14.032 2.264 - 2.276: 94.0687% ( 41) 00:10:14.032 2.276 - 2.287: 94.3731% ( 41) 00:10:14.032 2.287 - 2.299: 94.7517% ( 51) 00:10:14.032 2.299 - 2.311: 95.2713% ( 70) 00:10:14.032 2.311 - 2.323: 95.5089% ( 32) 00:10:14.032 2.323 - 2.335: 95.5980% ( 12) 00:10:14.032 2.335 - 2.347: 95.6796% ( 11) 00:10:14.032 2.347 - 2.359: 95.7538% ( 10) 00:10:14.032 2.359 - 2.370: 95.9320% ( 24) 00:10:14.032 2.370 - 2.382: 96.2735% ( 46) 00:10:14.032 2.382 - 2.394: 96.6224% ( 47) 00:10:14.032 2.394 - 2.406: 96.9490% ( 44) 00:10:14.032 2.406 - 2.418: 97.1346% ( 25) 00:10:14.032 2.418 - 2.430: 97.3202% ( 25) 
00:10:14.032 2.430 - 2.441: 97.5726% ( 34) 00:10:14.032 2.441 - 2.453: 97.7136% ( 19) 00:10:14.032 2.453 - 2.465: 97.8769% ( 22) 00:10:14.032 2.465 - 2.477: 97.9957% ( 16) 00:10:14.032 2.477 - 2.489: 98.0922% ( 13) 00:10:14.032 2.489 - 2.501: 98.1887% ( 13) 00:10:14.032 2.501 - 2.513: 98.2481% ( 8) 00:10:14.032 2.513 - 2.524: 98.2778% ( 4) 00:10:14.032 2.524 - 2.536: 98.3223% ( 6) 00:10:14.032 2.536 - 2.548: 98.4040% ( 11) 00:10:14.032 2.548 - 2.560: 98.4485% ( 6) 00:10:14.032 2.560 - 2.572: 98.4708% ( 3) 00:10:14.032 2.572 - 2.584: 98.4782% ( 1) 00:10:14.032 2.584 - 2.596: 9[2024-07-15 12:49:31.986039] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.032 8.4856% ( 1) 00:10:14.032 2.596 - 2.607: 98.5005% ( 2) 00:10:14.032 2.607 - 2.619: 98.5079% ( 1) 00:10:14.032 2.619 - 2.631: 98.5153% ( 1) 00:10:14.032 2.631 - 2.643: 98.5376% ( 3) 00:10:14.032 2.643 - 2.655: 98.5450% ( 1) 00:10:14.032 2.667 - 2.679: 98.5524% ( 1) 00:10:14.032 2.679 - 2.690: 98.5673% ( 2) 00:10:14.032 2.702 - 2.714: 98.5747% ( 1) 00:10:14.032 2.714 - 2.726: 98.5896% ( 2) 00:10:14.032 2.738 - 2.750: 98.6044% ( 2) 00:10:14.032 2.773 - 2.785: 98.6118% ( 1) 00:10:14.032 2.785 - 2.797: 98.6193% ( 1) 00:10:14.032 2.821 - 2.833: 98.6267% ( 1) 00:10:14.032 2.856 - 2.868: 98.6341% ( 1) 00:10:14.032 3.010 - 3.022: 98.6415% ( 1) 00:10:14.032 3.153 - 3.176: 98.6489% ( 1) 00:10:14.032 3.461 - 3.484: 98.6564% ( 1) 00:10:14.032 3.508 - 3.532: 98.6638% ( 1) 00:10:14.032 3.532 - 3.556: 98.6712% ( 1) 00:10:14.032 3.579 - 3.603: 98.6861% ( 2) 00:10:14.032 3.627 - 3.650: 98.6935% ( 1) 00:10:14.032 3.650 - 3.674: 98.7009% ( 1) 00:10:14.032 3.698 - 3.721: 98.7083% ( 1) 00:10:14.032 3.793 - 3.816: 98.7158% ( 1) 00:10:14.032 3.864 - 3.887: 98.7232% ( 1) 00:10:14.032 3.959 - 3.982: 98.7306% ( 1) 00:10:14.032 4.030 - 4.053: 98.7380% ( 1) 00:10:14.032 4.101 - 4.124: 98.7455% ( 1) 00:10:14.032 4.148 - 4.172: 98.7529% ( 1) 00:10:14.032 4.196 - 4.219: 98.7603% ( 1) 00:10:14.032 5.452 - 5.476: 98.7677% ( 1) 00:10:14.032 5.547 - 5.570: 98.7751% ( 1) 00:10:14.032 5.689 - 5.713: 98.7826% ( 1) 00:10:14.032 6.068 - 6.116: 98.7900% ( 1) 00:10:14.032 6.258 - 6.305: 98.7974% ( 1) 00:10:14.032 6.305 - 6.353: 98.8048% ( 1) 00:10:14.032 6.542 - 6.590: 98.8123% ( 1) 00:10:14.032 6.684 - 6.732: 98.8197% ( 1) 00:10:14.032 6.874 - 6.921: 98.8271% ( 1) 00:10:14.032 7.206 - 7.253: 98.8345% ( 1) 00:10:14.032 7.253 - 7.301: 98.8420% ( 1) 00:10:14.032 7.348 - 7.396: 98.8494% ( 1) 00:10:14.032 7.490 - 7.538: 98.8568% ( 1) 00:10:14.032 7.585 - 7.633: 98.8642% ( 1) 00:10:14.032 7.775 - 7.822: 98.8717% ( 1) 00:10:14.032 7.822 - 7.870: 98.8791% ( 1) 00:10:14.033 8.391 - 8.439: 98.8865% ( 1) 00:10:14.033 8.581 - 8.628: 98.8939% ( 1) 00:10:14.033 8.676 - 8.723: 98.9013% ( 1) 00:10:14.033 10.619 - 10.667: 98.9088% ( 1) 00:10:14.033 15.644 - 15.739: 98.9236% ( 2) 00:10:14.033 15.739 - 15.834: 98.9385% ( 2) 00:10:14.033 15.834 - 15.929: 98.9533% ( 2) 00:10:14.033 15.929 - 16.024: 98.9756% ( 3) 00:10:14.033 16.024 - 16.119: 98.9904% ( 2) 00:10:14.033 16.119 - 16.213: 99.0127% ( 3) 00:10:14.033 16.213 - 16.308: 99.0350% ( 3) 00:10:14.033 16.308 - 16.403: 99.0721% ( 5) 00:10:14.033 16.403 - 16.498: 99.0869% ( 2) 00:10:14.033 16.498 - 16.593: 99.1018% ( 2) 00:10:14.033 16.593 - 16.687: 99.1389% ( 5) 00:10:14.033 16.687 - 16.782: 99.1686% ( 4) 00:10:14.033 16.782 - 16.877: 99.1983% ( 4) 00:10:14.033 16.877 - 16.972: 99.2205% ( 3) 00:10:14.033 16.972 - 17.067: 99.2354% ( 2) 00:10:14.033 17.161 - 17.256: 99.2577% ( 3) 
00:10:14.033 17.351 - 17.446: 99.2651% ( 1) 00:10:14.033 17.446 - 17.541: 99.2725% ( 1) 00:10:14.033 17.541 - 17.636: 99.2874% ( 2) 00:10:14.033 17.636 - 17.730: 99.3022% ( 2) 00:10:14.033 17.730 - 17.825: 99.3096% ( 1) 00:10:14.033 17.825 - 17.920: 99.3171% ( 1) 00:10:14.033 18.110 - 18.204: 99.3393% ( 3) 00:10:14.033 18.204 - 18.299: 99.3542% ( 2) 00:10:14.033 18.394 - 18.489: 99.3616% ( 1) 00:10:14.033 18.584 - 18.679: 99.3690% ( 1) 00:10:14.033 21.239 - 21.333: 99.3764% ( 1) 00:10:14.033 21.618 - 21.713: 99.3839% ( 1) 00:10:14.033 3203.982 - 3228.255: 99.3913% ( 1) 00:10:14.033 3980.705 - 4004.978: 99.7773% ( 52) 00:10:14.033 4004.978 - 4029.250: 100.0000% ( 30) 00:10:14.033 00:10:14.033 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:10:14.033 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:10:14.033 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:10:14.033 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:10:14.033 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.290 [ 00:10:14.290 { 00:10:14.290 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:14.290 "subtype": "Discovery", 00:10:14.290 "listen_addresses": [], 00:10:14.290 "allow_any_host": true, 00:10:14.290 "hosts": [] 00:10:14.290 }, 00:10:14.290 { 00:10:14.290 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:14.290 "subtype": "NVMe", 00:10:14.290 "listen_addresses": [ 00:10:14.290 { 00:10:14.290 "trtype": "VFIOUSER", 00:10:14.290 "adrfam": "IPv4", 00:10:14.290 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:14.290 "trsvcid": "0" 00:10:14.290 } 00:10:14.290 ], 00:10:14.290 "allow_any_host": true, 00:10:14.290 "hosts": [], 00:10:14.290 "serial_number": "SPDK1", 00:10:14.290 "model_number": "SPDK bdev Controller", 00:10:14.290 "max_namespaces": 32, 00:10:14.290 "min_cntlid": 1, 00:10:14.290 "max_cntlid": 65519, 00:10:14.290 "namespaces": [ 00:10:14.290 { 00:10:14.290 "nsid": 1, 00:10:14.290 "bdev_name": "Malloc1", 00:10:14.290 "name": "Malloc1", 00:10:14.290 "nguid": "0B7A24976F914CC58EDD25EE916B8B8D", 00:10:14.290 "uuid": "0b7a2497-6f91-4cc5-8edd-25ee916b8b8d" 00:10:14.290 } 00:10:14.290 ] 00:10:14.290 }, 00:10:14.290 { 00:10:14.290 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:14.290 "subtype": "NVMe", 00:10:14.290 "listen_addresses": [ 00:10:14.290 { 00:10:14.290 "trtype": "VFIOUSER", 00:10:14.290 "adrfam": "IPv4", 00:10:14.290 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:14.290 "trsvcid": "0" 00:10:14.290 } 00:10:14.290 ], 00:10:14.290 "allow_any_host": true, 00:10:14.290 "hosts": [], 00:10:14.290 "serial_number": "SPDK2", 00:10:14.290 "model_number": "SPDK bdev Controller", 00:10:14.290 "max_namespaces": 32, 00:10:14.290 "min_cntlid": 1, 00:10:14.290 "max_cntlid": 65519, 00:10:14.290 "namespaces": [ 00:10:14.290 { 00:10:14.290 "nsid": 1, 00:10:14.291 "bdev_name": "Malloc2", 00:10:14.291 "name": "Malloc2", 00:10:14.291 "nguid": "BEC968E0C5824A4391C4564ED0A8CF06", 00:10:14.291 "uuid": "bec968e0-c582-4a43-91c4-564ed0a8cf06" 00:10:14.291 } 00:10:14.291 ] 00:10:14.291 } 00:10:14.291 ] 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:14.291 
12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3338318 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:14.291 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:10:14.291 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.291 [2024-07-15 12:49:32.493255] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:10:14.549 Malloc3 00:10:14.549 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:10:14.805 [2024-07-15 12:49:32.850723] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:10:14.805 12:49:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:14.805 Asynchronous Event Request test 00:10:14.805 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.805 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:10:14.805 Registering asynchronous event callbacks... 00:10:14.805 Starting namespace attribute notice tests for all controllers... 00:10:14.805 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:14.805 aer_cb - Changed Namespace 00:10:14.805 Cleaning up... 
00:10:15.063 [ 00:10:15.063 { 00:10:15.063 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:15.063 "subtype": "Discovery", 00:10:15.063 "listen_addresses": [], 00:10:15.063 "allow_any_host": true, 00:10:15.063 "hosts": [] 00:10:15.063 }, 00:10:15.063 { 00:10:15.063 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:15.063 "subtype": "NVMe", 00:10:15.063 "listen_addresses": [ 00:10:15.063 { 00:10:15.063 "trtype": "VFIOUSER", 00:10:15.063 "adrfam": "IPv4", 00:10:15.063 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:15.063 "trsvcid": "0" 00:10:15.063 } 00:10:15.063 ], 00:10:15.063 "allow_any_host": true, 00:10:15.063 "hosts": [], 00:10:15.063 "serial_number": "SPDK1", 00:10:15.063 "model_number": "SPDK bdev Controller", 00:10:15.063 "max_namespaces": 32, 00:10:15.063 "min_cntlid": 1, 00:10:15.063 "max_cntlid": 65519, 00:10:15.063 "namespaces": [ 00:10:15.063 { 00:10:15.063 "nsid": 1, 00:10:15.063 "bdev_name": "Malloc1", 00:10:15.063 "name": "Malloc1", 00:10:15.063 "nguid": "0B7A24976F914CC58EDD25EE916B8B8D", 00:10:15.063 "uuid": "0b7a2497-6f91-4cc5-8edd-25ee916b8b8d" 00:10:15.063 }, 00:10:15.063 { 00:10:15.063 "nsid": 2, 00:10:15.063 "bdev_name": "Malloc3", 00:10:15.063 "name": "Malloc3", 00:10:15.063 "nguid": "F2C4239A23F14EDB862E26313BF939FE", 00:10:15.063 "uuid": "f2c4239a-23f1-4edb-862e-26313bf939fe" 00:10:15.063 } 00:10:15.063 ] 00:10:15.063 }, 00:10:15.063 { 00:10:15.063 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:15.063 "subtype": "NVMe", 00:10:15.063 "listen_addresses": [ 00:10:15.063 { 00:10:15.063 "trtype": "VFIOUSER", 00:10:15.063 "adrfam": "IPv4", 00:10:15.063 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:15.063 "trsvcid": "0" 00:10:15.063 } 00:10:15.063 ], 00:10:15.063 "allow_any_host": true, 00:10:15.063 "hosts": [], 00:10:15.063 "serial_number": "SPDK2", 00:10:15.063 "model_number": "SPDK bdev Controller", 00:10:15.063 "max_namespaces": 32, 00:10:15.063 "min_cntlid": 1, 00:10:15.063 "max_cntlid": 65519, 00:10:15.063 "namespaces": [ 00:10:15.063 { 00:10:15.063 "nsid": 1, 00:10:15.063 "bdev_name": "Malloc2", 00:10:15.063 "name": "Malloc2", 00:10:15.063 "nguid": "BEC968E0C5824A4391C4564ED0A8CF06", 00:10:15.063 "uuid": "bec968e0-c582-4a43-91c4-564ed0a8cf06" 00:10:15.063 } 00:10:15.063 ] 00:10:15.063 } 00:10:15.063 ] 00:10:15.063 12:49:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3338318 00:10:15.063 12:49:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:15.063 12:49:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:15.063 12:49:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:10:15.063 12:49:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:10:15.063 [2024-07-15 12:49:33.148313] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:10:15.063 [2024-07-15 12:49:33.148355] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3338456 ] 00:10:15.063 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.063 [2024-07-15 12:49:33.183893] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:10:15.063 [2024-07-15 12:49:33.190023] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.063 [2024-07-15 12:49:33.190065] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff9a74e2000 00:10:15.063 [2024-07-15 12:49:33.191031] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.192043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.193047] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.194063] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.195070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.196072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.197060] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.198065] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:10:15.063 [2024-07-15 12:49:33.199081] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:10:15.063 [2024-07-15 12:49:33.199117] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff9a74d7000 00:10:15.063 [2024-07-15 12:49:33.200232] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.063 [2024-07-15 12:49:33.218020] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:10:15.063 [2024-07-15 12:49:33.218070] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:10:15.063 [2024-07-15 12:49:33.220167] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.063 [2024-07-15 12:49:33.220223] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:10:15.063 [2024-07-15 12:49:33.220312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:10:15.063 [2024-07-15 12:49:33.220334] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:10:15.063 [2024-07-15 12:49:33.220344] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:10:15.063 [2024-07-15 12:49:33.221181] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:10:15.063 [2024-07-15 12:49:33.221202] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:10:15.063 [2024-07-15 12:49:33.221215] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:10:15.063 [2024-07-15 12:49:33.222202] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:10:15.063 [2024-07-15 12:49:33.222223] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:10:15.063 [2024-07-15 12:49:33.222243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:10:15.063 [2024-07-15 12:49:33.223187] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:10:15.063 [2024-07-15 12:49:33.223209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:10:15.063 [2024-07-15 12:49:33.224193] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:10:15.063 [2024-07-15 12:49:33.224214] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:10:15.063 [2024-07-15 12:49:33.224223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:10:15.063 [2024-07-15 12:49:33.224234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:10:15.063 [2024-07-15 12:49:33.224343] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:10:15.063 [2024-07-15 12:49:33.224351] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:10:15.063 [2024-07-15 12:49:33.224359] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:10:15.063 [2024-07-15 12:49:33.225207] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:10:15.063 [2024-07-15 12:49:33.226212] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:10:15.063 [2024-07-15 12:49:33.227220] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.063 [2024-07-15 12:49:33.228213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.063 [2024-07-15 12:49:33.228289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:10:15.063 [2024-07-15 12:49:33.229236] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:10:15.063 [2024-07-15 12:49:33.229258] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:10:15.063 [2024-07-15 12:49:33.229268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:10:15.063 [2024-07-15 12:49:33.229292] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:10:15.063 [2024-07-15 12:49:33.229305] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:10:15.063 [2024-07-15 12:49:33.229325] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.063 [2024-07-15 12:49:33.229334] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.064 [2024-07-15 12:49:33.229354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.064 [2024-07-15 12:49:33.235754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:10:15.064 [2024-07-15 12:49:33.235778] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:10:15.064 [2024-07-15 12:49:33.235795] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:10:15.064 [2024-07-15 12:49:33.235803] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:10:15.064 [2024-07-15 12:49:33.235811] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:10:15.064 [2024-07-15 12:49:33.235819] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:10:15.064 [2024-07-15 12:49:33.235827] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:10:15.064 [2024-07-15 12:49:33.235835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.235849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.235865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:10:15.064 [2024-07-15 12:49:33.243749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:10:15.064 [2024-07-15 12:49:33.243778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.064 [2024-07-15 12:49:33.243793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.064 [2024-07-15 12:49:33.243805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.064 [2024-07-15 12:49:33.243817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:15.064 [2024-07-15 12:49:33.243825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.243841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.243856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:10:15.064 [2024-07-15 12:49:33.251747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:10:15.064 [2024-07-15 12:49:33.251766] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:10:15.064 [2024-07-15 12:49:33.251776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.251788] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.251799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.251812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.064 [2024-07-15 12:49:33.259749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:10:15.064 [2024-07-15 12:49:33.259821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.259841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.259856] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:10:15.064 [2024-07-15 12:49:33.259864] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:10:15.064 [2024-07-15 12:49:33.259874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:10:15.064 [2024-07-15 12:49:33.267750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:10:15.064 [2024-07-15 12:49:33.267775] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:10:15.064 [2024-07-15 12:49:33.267796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.267813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:10:15.064 [2024-07-15 12:49:33.267826] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.064 [2024-07-15 12:49:33.267835] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.064 [2024-07-15 12:49:33.267845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.321 [2024-07-15 12:49:33.275764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.275793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.275810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.275823] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:10:15.322 [2024-07-15 12:49:33.275832] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.322 [2024-07-15 12:49:33.275842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.283750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.283772] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.283784] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.283799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.283811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.283819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.283827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:10:15.322 
[2024-07-15 12:49:33.283836] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:10:15.322 [2024-07-15 12:49:33.283844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:10:15.322 [2024-07-15 12:49:33.283856] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:10:15.322 [2024-07-15 12:49:33.283883] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.291764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.291791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.299762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.299788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.307763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.307788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.315750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.315785] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:10:15.322 [2024-07-15 12:49:33.315797] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:10:15.322 [2024-07-15 12:49:33.315804] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:10:15.322 [2024-07-15 12:49:33.315809] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:10:15.322 [2024-07-15 12:49:33.315819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:10:15.322 [2024-07-15 12:49:33.315831] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:10:15.322 [2024-07-15 12:49:33.315839] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:10:15.322 [2024-07-15 12:49:33.315848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.315859] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:10:15.322 [2024-07-15 12:49:33.315867] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:10:15.322 [2024-07-15 12:49:33.315875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:10:15.322 [2024-07-15 12:49:33.315887] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:10:15.322 [2024-07-15 12:49:33.315895] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:10:15.322 [2024-07-15 12:49:33.315904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:10:15.322 [2024-07-15 12:49:33.323756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.323785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.323803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:10:15.322 [2024-07-15 12:49:33.323815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:10:15.322 ===================================================== 00:10:15.322 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.322 ===================================================== 00:10:15.322 Controller Capabilities/Features 00:10:15.322 ================================ 00:10:15.322 Vendor ID: 4e58 00:10:15.322 Subsystem Vendor ID: 4e58 00:10:15.322 Serial Number: SPDK2 00:10:15.322 Model Number: SPDK bdev Controller 00:10:15.322 Firmware Version: 24.09 00:10:15.322 Recommended Arb Burst: 6 00:10:15.322 IEEE OUI Identifier: 8d 6b 50 00:10:15.322 Multi-path I/O 00:10:15.322 May have multiple subsystem ports: Yes 00:10:15.322 May have multiple controllers: Yes 00:10:15.322 Associated with SR-IOV VF: No 00:10:15.322 Max Data Transfer Size: 131072 00:10:15.322 Max Number of Namespaces: 32 00:10:15.322 Max Number of I/O Queues: 127 00:10:15.322 NVMe Specification Version (VS): 1.3 00:10:15.322 NVMe Specification Version (Identify): 1.3 00:10:15.322 Maximum Queue Entries: 256 00:10:15.322 Contiguous Queues Required: Yes 00:10:15.322 Arbitration Mechanisms Supported 00:10:15.322 Weighted Round Robin: Not Supported 00:10:15.322 Vendor Specific: Not Supported 00:10:15.322 Reset Timeout: 15000 ms 00:10:15.322 Doorbell Stride: 4 bytes 00:10:15.322 NVM Subsystem Reset: Not Supported 00:10:15.322 Command Sets Supported 00:10:15.322 NVM Command Set: Supported 00:10:15.322 Boot Partition: Not Supported 00:10:15.322 Memory Page Size Minimum: 4096 bytes 00:10:15.322 Memory Page Size Maximum: 4096 bytes 00:10:15.322 Persistent Memory Region: Not Supported 00:10:15.322 Optional Asynchronous Events Supported 00:10:15.322 Namespace Attribute Notices: Supported 00:10:15.322 Firmware Activation Notices: Not Supported 00:10:15.322 ANA Change Notices: Not Supported 00:10:15.322 PLE Aggregate Log Change Notices: Not Supported 00:10:15.322 LBA Status Info Alert Notices: Not Supported 00:10:15.322 EGE Aggregate Log Change Notices: Not Supported 00:10:15.322 Normal NVM Subsystem Shutdown event: Not Supported 00:10:15.322 Zone Descriptor Change Notices: Not Supported 00:10:15.322 Discovery Log Change Notices: Not Supported 00:10:15.322 Controller Attributes 00:10:15.322 128-bit Host Identifier: Supported 00:10:15.322 Non-Operational Permissive Mode: Not Supported 00:10:15.322 NVM Sets: Not Supported 00:10:15.322 Read Recovery Levels: Not Supported 
00:10:15.322 Endurance Groups: Not Supported 00:10:15.322 Predictable Latency Mode: Not Supported 00:10:15.322 Traffic Based Keep ALive: Not Supported 00:10:15.322 Namespace Granularity: Not Supported 00:10:15.322 SQ Associations: Not Supported 00:10:15.322 UUID List: Not Supported 00:10:15.322 Multi-Domain Subsystem: Not Supported 00:10:15.322 Fixed Capacity Management: Not Supported 00:10:15.322 Variable Capacity Management: Not Supported 00:10:15.322 Delete Endurance Group: Not Supported 00:10:15.322 Delete NVM Set: Not Supported 00:10:15.322 Extended LBA Formats Supported: Not Supported 00:10:15.322 Flexible Data Placement Supported: Not Supported 00:10:15.322 00:10:15.322 Controller Memory Buffer Support 00:10:15.322 ================================ 00:10:15.322 Supported: No 00:10:15.322 00:10:15.322 Persistent Memory Region Support 00:10:15.322 ================================ 00:10:15.322 Supported: No 00:10:15.322 00:10:15.322 Admin Command Set Attributes 00:10:15.322 ============================ 00:10:15.322 Security Send/Receive: Not Supported 00:10:15.322 Format NVM: Not Supported 00:10:15.322 Firmware Activate/Download: Not Supported 00:10:15.322 Namespace Management: Not Supported 00:10:15.322 Device Self-Test: Not Supported 00:10:15.322 Directives: Not Supported 00:10:15.322 NVMe-MI: Not Supported 00:10:15.322 Virtualization Management: Not Supported 00:10:15.322 Doorbell Buffer Config: Not Supported 00:10:15.322 Get LBA Status Capability: Not Supported 00:10:15.322 Command & Feature Lockdown Capability: Not Supported 00:10:15.322 Abort Command Limit: 4 00:10:15.322 Async Event Request Limit: 4 00:10:15.322 Number of Firmware Slots: N/A 00:10:15.322 Firmware Slot 1 Read-Only: N/A 00:10:15.322 Firmware Activation Without Reset: N/A 00:10:15.322 Multiple Update Detection Support: N/A 00:10:15.322 Firmware Update Granularity: No Information Provided 00:10:15.322 Per-Namespace SMART Log: No 00:10:15.322 Asymmetric Namespace Access Log Page: Not Supported 00:10:15.322 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:10:15.322 Command Effects Log Page: Supported 00:10:15.322 Get Log Page Extended Data: Supported 00:10:15.322 Telemetry Log Pages: Not Supported 00:10:15.322 Persistent Event Log Pages: Not Supported 00:10:15.322 Supported Log Pages Log Page: May Support 00:10:15.323 Commands Supported & Effects Log Page: Not Supported 00:10:15.323 Feature Identifiers & Effects Log Page:May Support 00:10:15.323 NVMe-MI Commands & Effects Log Page: May Support 00:10:15.323 Data Area 4 for Telemetry Log: Not Supported 00:10:15.323 Error Log Page Entries Supported: 128 00:10:15.323 Keep Alive: Supported 00:10:15.323 Keep Alive Granularity: 10000 ms 00:10:15.323 00:10:15.323 NVM Command Set Attributes 00:10:15.323 ========================== 00:10:15.323 Submission Queue Entry Size 00:10:15.323 Max: 64 00:10:15.323 Min: 64 00:10:15.323 Completion Queue Entry Size 00:10:15.323 Max: 16 00:10:15.323 Min: 16 00:10:15.323 Number of Namespaces: 32 00:10:15.323 Compare Command: Supported 00:10:15.323 Write Uncorrectable Command: Not Supported 00:10:15.323 Dataset Management Command: Supported 00:10:15.323 Write Zeroes Command: Supported 00:10:15.323 Set Features Save Field: Not Supported 00:10:15.323 Reservations: Not Supported 00:10:15.323 Timestamp: Not Supported 00:10:15.323 Copy: Supported 00:10:15.323 Volatile Write Cache: Present 00:10:15.323 Atomic Write Unit (Normal): 1 00:10:15.323 Atomic Write Unit (PFail): 1 00:10:15.323 Atomic Compare & Write Unit: 1 00:10:15.323 Fused Compare & Write: 
Supported 00:10:15.323 Scatter-Gather List 00:10:15.323 SGL Command Set: Supported (Dword aligned) 00:10:15.323 SGL Keyed: Not Supported 00:10:15.323 SGL Bit Bucket Descriptor: Not Supported 00:10:15.323 SGL Metadata Pointer: Not Supported 00:10:15.323 Oversized SGL: Not Supported 00:10:15.323 SGL Metadata Address: Not Supported 00:10:15.323 SGL Offset: Not Supported 00:10:15.323 Transport SGL Data Block: Not Supported 00:10:15.323 Replay Protected Memory Block: Not Supported 00:10:15.323 00:10:15.323 Firmware Slot Information 00:10:15.323 ========================= 00:10:15.323 Active slot: 1 00:10:15.323 Slot 1 Firmware Revision: 24.09 00:10:15.323 00:10:15.323 00:10:15.323 Commands Supported and Effects 00:10:15.323 ============================== 00:10:15.323 Admin Commands 00:10:15.323 -------------- 00:10:15.323 Get Log Page (02h): Supported 00:10:15.323 Identify (06h): Supported 00:10:15.323 Abort (08h): Supported 00:10:15.323 Set Features (09h): Supported 00:10:15.323 Get Features (0Ah): Supported 00:10:15.323 Asynchronous Event Request (0Ch): Supported 00:10:15.323 Keep Alive (18h): Supported 00:10:15.323 I/O Commands 00:10:15.323 ------------ 00:10:15.323 Flush (00h): Supported LBA-Change 00:10:15.323 Write (01h): Supported LBA-Change 00:10:15.323 Read (02h): Supported 00:10:15.323 Compare (05h): Supported 00:10:15.323 Write Zeroes (08h): Supported LBA-Change 00:10:15.323 Dataset Management (09h): Supported LBA-Change 00:10:15.323 Copy (19h): Supported LBA-Change 00:10:15.323 00:10:15.323 Error Log 00:10:15.323 ========= 00:10:15.323 00:10:15.323 Arbitration 00:10:15.323 =========== 00:10:15.323 Arbitration Burst: 1 00:10:15.323 00:10:15.323 Power Management 00:10:15.323 ================ 00:10:15.323 Number of Power States: 1 00:10:15.323 Current Power State: Power State #0 00:10:15.323 Power State #0: 00:10:15.323 Max Power: 0.00 W 00:10:15.323 Non-Operational State: Operational 00:10:15.323 Entry Latency: Not Reported 00:10:15.323 Exit Latency: Not Reported 00:10:15.323 Relative Read Throughput: 0 00:10:15.323 Relative Read Latency: 0 00:10:15.323 Relative Write Throughput: 0 00:10:15.323 Relative Write Latency: 0 00:10:15.323 Idle Power: Not Reported 00:10:15.323 Active Power: Not Reported 00:10:15.323 Non-Operational Permissive Mode: Not Supported 00:10:15.323 00:10:15.323 Health Information 00:10:15.323 ================== 00:10:15.323 Critical Warnings: 00:10:15.323 Available Spare Space: OK 00:10:15.323 Temperature: OK 00:10:15.323 Device Reliability: OK 00:10:15.323 Read Only: No 00:10:15.323 Volatile Memory Backup: OK 00:10:15.323 Current Temperature: 0 Kelvin (-273 Celsius) 00:10:15.323 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:10:15.323 Available Spare: 0% 00:10:15.323 Available Sp[2024-07-15 12:49:33.323940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:10:15.323 [2024-07-15 12:49:33.331752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:10:15.323 [2024-07-15 12:49:33.331805] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:10:15.323 [2024-07-15 12:49:33.331823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.323 [2024-07-15 12:49:33.331834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.323 [2024-07-15 12:49:33.331844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.323 [2024-07-15 12:49:33.331854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:15.323 [2024-07-15 12:49:33.331918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:10:15.323 [2024-07-15 12:49:33.331940] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:10:15.323 [2024-07-15 12:49:33.332916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.323 [2024-07-15 12:49:33.332996] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:10:15.323 [2024-07-15 12:49:33.333016] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:10:15.323 [2024-07-15 12:49:33.333927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:10:15.323 [2024-07-15 12:49:33.333952] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:10:15.323 [2024-07-15 12:49:33.334004] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:10:15.323 [2024-07-15 12:49:33.336749] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:10:15.323 are Threshold: 0% 00:10:15.323 Life Percentage Used: 0% 00:10:15.323 Data Units Read: 0 00:10:15.323 Data Units Written: 0 00:10:15.323 Host Read Commands: 0 00:10:15.323 Host Write Commands: 0 00:10:15.323 Controller Busy Time: 0 minutes 00:10:15.323 Power Cycles: 0 00:10:15.323 Power On Hours: 0 hours 00:10:15.323 Unsafe Shutdowns: 0 00:10:15.323 Unrecoverable Media Errors: 0 00:10:15.323 Lifetime Error Log Entries: 0 00:10:15.323 Warning Temperature Time: 0 minutes 00:10:15.323 Critical Temperature Time: 0 minutes 00:10:15.323 00:10:15.323 Number of Queues 00:10:15.323 ================ 00:10:15.323 Number of I/O Submission Queues: 127 00:10:15.323 Number of I/O Completion Queues: 127 00:10:15.323 00:10:15.323 Active Namespaces 00:10:15.323 ================= 00:10:15.323 Namespace ID:1 00:10:15.323 Error Recovery Timeout: Unlimited 00:10:15.323 Command Set Identifier: NVM (00h) 00:10:15.323 Deallocate: Supported 00:10:15.323 Deallocated/Unwritten Error: Not Supported 00:10:15.323 Deallocated Read Value: Unknown 00:10:15.323 Deallocate in Write Zeroes: Not Supported 00:10:15.323 Deallocated Guard Field: 0xFFFF 00:10:15.323 Flush: Supported 00:10:15.323 Reservation: Supported 00:10:15.323 Namespace Sharing Capabilities: Multiple Controllers 00:10:15.323 Size (in LBAs): 131072 (0GiB) 00:10:15.323 Capacity (in LBAs): 131072 (0GiB) 00:10:15.323 Utilization (in LBAs): 131072 (0GiB) 00:10:15.323 NGUID: BEC968E0C5824A4391C4564ED0A8CF06 00:10:15.323 UUID: bec968e0-c582-4a43-91c4-564ed0a8cf06 00:10:15.323 Thin Provisioning: Not Supported 00:10:15.323 Per-NS Atomic Units: Yes 00:10:15.323 Atomic Boundary Size (Normal): 0 00:10:15.323 Atomic Boundary Size 
(PFail): 0 00:10:15.323 Atomic Boundary Offset: 0 00:10:15.323 Maximum Single Source Range Length: 65535 00:10:15.323 Maximum Copy Length: 65535 00:10:15.323 Maximum Source Range Count: 1 00:10:15.323 NGUID/EUI64 Never Reused: No 00:10:15.323 Namespace Write Protected: No 00:10:15.323 Number of LBA Formats: 1 00:10:15.323 Current LBA Format: LBA Format #00 00:10:15.323 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:15.323 00:10:15.323 12:49:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:10:15.323 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.580 [2024-07-15 12:49:33.565682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:20.835 Initializing NVMe Controllers 00:10:20.835 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:20.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:20.835 Initialization complete. Launching workers. 00:10:20.835 ======================================================== 00:10:20.835 Latency(us) 00:10:20.835 Device Information : IOPS MiB/s Average min max 00:10:20.835 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35020.19 136.80 3655.95 1140.89 8622.19 00:10:20.835 ======================================================== 00:10:20.835 Total : 35020.19 136.80 3655.95 1140.89 8622.19 00:10:20.835 00:10:20.835 [2024-07-15 12:49:38.674106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:20.835 12:49:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:20.835 EAL: No free 2048 kB hugepages reported on node 1 00:10:20.835 [2024-07-15 12:49:38.914831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:26.126 Initializing NVMe Controllers 00:10:26.126 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:26.126 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:26.126 Initialization complete. Launching workers. 
00:10:26.126 ======================================================== 00:10:26.126 Latency(us) 00:10:26.127 Device Information : IOPS MiB/s Average min max 00:10:26.127 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32123.04 125.48 3983.92 1192.99 9990.26 00:10:26.127 ======================================================== 00:10:26.127 Total : 32123.04 125.48 3983.92 1192.99 9990.26 00:10:26.127 00:10:26.127 [2024-07-15 12:49:43.939812] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:26.127 12:49:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:26.127 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.127 [2024-07-15 12:49:44.156700] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:31.385 [2024-07-15 12:49:49.289893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:31.385 Initializing NVMe Controllers 00:10:31.385 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.385 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:31.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:31.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:31.385 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:31.385 Initialization complete. Launching workers. 00:10:31.385 Starting thread on core 2 00:10:31.385 Starting thread on core 3 00:10:31.385 Starting thread on core 1 00:10:31.385 12:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:31.385 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.643 [2024-07-15 12:49:49.598302] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.923 [2024-07-15 12:49:52.665402] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.923 Initializing NVMe Controllers 00:10:34.923 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.923 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.923 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:34.923 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:34.923 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:34.923 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:34.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:34.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:34.923 Initialization complete. Launching workers. 
00:10:34.923 Starting thread on core 1 with urgent priority queue 00:10:34.923 Starting thread on core 2 with urgent priority queue 00:10:34.923 Starting thread on core 3 with urgent priority queue 00:10:34.923 Starting thread on core 0 with urgent priority queue 00:10:34.923 SPDK bdev Controller (SPDK2 ) core 0: 5012.33 IO/s 19.95 secs/100000 ios 00:10:34.923 SPDK bdev Controller (SPDK2 ) core 1: 5190.67 IO/s 19.27 secs/100000 ios 00:10:34.923 SPDK bdev Controller (SPDK2 ) core 2: 4775.33 IO/s 20.94 secs/100000 ios 00:10:34.923 SPDK bdev Controller (SPDK2 ) core 3: 5510.00 IO/s 18.15 secs/100000 ios 00:10:34.923 ======================================================== 00:10:34.923 00:10:34.924 12:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:34.924 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.924 [2024-07-15 12:49:52.948205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:34.924 Initializing NVMe Controllers 00:10:34.924 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.924 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:34.924 Namespace ID: 1 size: 0GB 00:10:34.924 Initialization complete. 00:10:34.924 INFO: using host memory buffer for IO 00:10:34.924 Hello world! 00:10:34.924 [2024-07-15 12:49:52.958311] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:34.924 12:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:34.924 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.193 [2024-07-15 12:49:53.243541] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.562 Initializing NVMe Controllers 00:10:36.562 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.562 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:36.562 Initialization complete. Launching workers. 
00:10:36.562 submit (in ns) avg, min, max = 6828.1, 3485.6, 4016713.3 00:10:36.562 complete (in ns) avg, min, max = 29320.8, 2043.3, 4020066.7 00:10:36.562 00:10:36.562 Submit histogram 00:10:36.562 ================ 00:10:36.562 Range in us Cumulative Count 00:10:36.562 3.484 - 3.508: 0.1105% ( 15) 00:10:36.562 3.508 - 3.532: 1.0018% ( 121) 00:10:36.562 3.532 - 3.556: 2.5560% ( 211) 00:10:36.562 3.556 - 3.579: 6.6072% ( 550) 00:10:36.562 3.579 - 3.603: 13.6123% ( 951) 00:10:36.562 3.603 - 3.627: 22.3262% ( 1183) 00:10:36.562 3.627 - 3.650: 31.1874% ( 1203) 00:10:36.562 3.650 - 3.674: 39.5035% ( 1129) 00:10:36.562 3.674 - 3.698: 46.8032% ( 991) 00:10:36.562 3.698 - 3.721: 54.3606% ( 1026) 00:10:36.562 3.721 - 3.745: 59.7525% ( 732) 00:10:36.562 3.745 - 3.769: 64.2899% ( 616) 00:10:36.562 3.769 - 3.793: 68.1791% ( 528) 00:10:36.562 3.793 - 3.816: 71.9210% ( 508) 00:10:36.562 3.816 - 3.840: 75.2357% ( 450) 00:10:36.562 3.840 - 3.864: 78.6682% ( 466) 00:10:36.562 3.864 - 3.887: 81.7840% ( 423) 00:10:36.562 3.887 - 3.911: 84.6273% ( 386) 00:10:36.562 3.911 - 3.935: 87.3159% ( 365) 00:10:36.562 3.935 - 3.959: 89.2384% ( 261) 00:10:36.562 3.959 - 3.982: 90.9915% ( 238) 00:10:36.562 3.982 - 4.006: 92.3689% ( 187) 00:10:36.562 4.006 - 4.030: 93.6358% ( 172) 00:10:36.562 4.030 - 4.053: 94.4829% ( 115) 00:10:36.562 4.053 - 4.077: 95.1311% ( 88) 00:10:36.562 4.077 - 4.101: 95.7499% ( 84) 00:10:36.562 4.101 - 4.124: 96.3539% ( 82) 00:10:36.562 4.124 - 4.148: 96.7737% ( 57) 00:10:36.562 4.148 - 4.172: 97.0978% ( 44) 00:10:36.562 4.172 - 4.196: 97.2746% ( 24) 00:10:36.562 4.196 - 4.219: 97.3483% ( 10) 00:10:36.562 4.219 - 4.243: 97.4146% ( 9) 00:10:36.562 4.243 - 4.267: 97.5324% ( 16) 00:10:36.562 4.267 - 4.290: 97.6061% ( 10) 00:10:36.562 4.290 - 4.314: 97.6797% ( 10) 00:10:36.562 4.314 - 4.338: 97.7534% ( 10) 00:10:36.562 4.338 - 4.361: 97.8123% ( 8) 00:10:36.562 4.361 - 4.385: 97.8639% ( 7) 00:10:36.562 4.385 - 4.409: 97.9154% ( 7) 00:10:36.562 4.409 - 4.433: 97.9228% ( 1) 00:10:36.562 4.433 - 4.456: 97.9302% ( 1) 00:10:36.562 4.456 - 4.480: 97.9449% ( 2) 00:10:36.562 4.480 - 4.504: 97.9523% ( 1) 00:10:36.562 4.599 - 4.622: 97.9596% ( 1) 00:10:36.562 4.741 - 4.764: 97.9744% ( 2) 00:10:36.562 4.788 - 4.812: 97.9817% ( 1) 00:10:36.562 4.812 - 4.836: 97.9965% ( 2) 00:10:36.562 4.836 - 4.859: 98.0186% ( 3) 00:10:36.562 4.859 - 4.883: 98.0480% ( 4) 00:10:36.562 4.883 - 4.907: 98.0996% ( 7) 00:10:36.562 4.907 - 4.930: 98.1217% ( 3) 00:10:36.562 4.930 - 4.954: 98.1438% ( 3) 00:10:36.562 4.954 - 4.978: 98.1806% ( 5) 00:10:36.562 4.978 - 5.001: 98.2322% ( 7) 00:10:36.562 5.001 - 5.025: 98.2985% ( 9) 00:10:36.562 5.025 - 5.049: 98.3500% ( 7) 00:10:36.562 5.049 - 5.073: 98.3869% ( 5) 00:10:36.562 5.073 - 5.096: 98.4311% ( 6) 00:10:36.562 5.096 - 5.120: 98.4679% ( 5) 00:10:36.562 5.120 - 5.144: 98.5268% ( 8) 00:10:36.562 5.144 - 5.167: 98.5342% ( 1) 00:10:36.562 5.167 - 5.191: 98.5710% ( 5) 00:10:36.562 5.191 - 5.215: 98.5784% ( 1) 00:10:36.562 5.215 - 5.239: 98.6078% ( 4) 00:10:36.562 5.239 - 5.262: 98.6299% ( 3) 00:10:36.562 5.262 - 5.286: 98.6668% ( 5) 00:10:36.562 5.286 - 5.310: 98.7036% ( 5) 00:10:36.562 5.310 - 5.333: 98.7183% ( 2) 00:10:36.562 5.333 - 5.357: 98.7478% ( 4) 00:10:36.562 5.357 - 5.381: 98.7773% ( 4) 00:10:36.562 5.381 - 5.404: 98.7920% ( 2) 00:10:36.562 5.404 - 5.428: 98.8067% ( 2) 00:10:36.562 5.452 - 5.476: 98.8141% ( 1) 00:10:36.562 5.618 - 5.641: 98.8214% ( 1) 00:10:36.562 5.665 - 5.689: 98.8288% ( 1) 00:10:36.562 5.736 - 5.760: 98.8362% ( 1) 00:10:36.562 5.855 - 5.879: 98.8435% ( 1) 
00:10:36.562 5.950 - 5.973: 98.8509% ( 1) 00:10:36.562 6.827 - 6.874: 98.8583% ( 1) 00:10:36.562 6.874 - 6.921: 98.8656% ( 1) 00:10:36.562 7.064 - 7.111: 98.8804% ( 2) 00:10:36.562 7.301 - 7.348: 98.8877% ( 1) 00:10:36.562 7.348 - 7.396: 98.9025% ( 2) 00:10:36.562 7.443 - 7.490: 98.9098% ( 1) 00:10:36.562 7.538 - 7.585: 98.9172% ( 1) 00:10:36.562 7.680 - 7.727: 98.9246% ( 1) 00:10:36.562 7.870 - 7.917: 98.9319% ( 1) 00:10:36.562 8.059 - 8.107: 98.9393% ( 1) 00:10:36.562 8.107 - 8.154: 98.9467% ( 1) 00:10:36.562 8.154 - 8.201: 98.9540% ( 1) 00:10:36.562 8.201 - 8.249: 98.9688% ( 2) 00:10:36.562 8.249 - 8.296: 98.9761% ( 1) 00:10:36.562 8.296 - 8.344: 98.9835% ( 1) 00:10:36.562 8.486 - 8.533: 98.9982% ( 2) 00:10:36.562 8.581 - 8.628: 99.0056% ( 1) 00:10:36.562 8.770 - 8.818: 99.0130% ( 1) 00:10:36.562 8.818 - 8.865: 99.0277% ( 2) 00:10:36.562 8.913 - 8.960: 99.0351% ( 1) 00:10:36.562 9.007 - 9.055: 99.0498% ( 2) 00:10:36.562 9.055 - 9.102: 99.0572% ( 1) 00:10:36.562 9.102 - 9.150: 99.0645% ( 1) 00:10:36.562 9.529 - 9.576: 99.0719% ( 1) 00:10:36.562 9.624 - 9.671: 99.0793% ( 1) 00:10:36.562 10.335 - 10.382: 99.0866% ( 1) 00:10:36.562 10.382 - 10.430: 99.0940% ( 1) 00:10:36.562 10.951 - 10.999: 99.1014% ( 1) 00:10:36.562 10.999 - 11.046: 99.1087% ( 1) 00:10:36.562 11.330 - 11.378: 99.1235% ( 2) 00:10:36.562 11.378 - 11.425: 99.1308% ( 1) 00:10:36.562 11.757 - 11.804: 99.1382% ( 1) 00:10:36.562 12.231 - 12.326: 99.1529% ( 2) 00:10:36.562 12.421 - 12.516: 99.1603% ( 1) 00:10:36.562 14.317 - 14.412: 99.1676% ( 1) 00:10:36.562 17.067 - 17.161: 99.1750% ( 1) 00:10:36.562 17.256 - 17.351: 99.1824% ( 1) 00:10:36.562 17.351 - 17.446: 99.1897% ( 1) 00:10:36.562 17.446 - 17.541: 99.2118% ( 3) 00:10:36.562 17.541 - 17.636: 99.2487% ( 5) 00:10:36.562 17.636 - 17.730: 99.3150% ( 9) 00:10:36.562 17.730 - 17.825: 99.3665% ( 7) 00:10:36.562 17.825 - 17.920: 99.4328% ( 9) 00:10:36.562 17.920 - 18.015: 99.5138% ( 11) 00:10:36.562 18.015 - 18.110: 99.5433% ( 4) 00:10:36.562 18.110 - 18.204: 99.5949% ( 7) 00:10:36.562 18.204 - 18.299: 99.6391% ( 6) 00:10:36.562 18.299 - 18.394: 99.6612% ( 3) 00:10:36.562 18.394 - 18.489: 99.6906% ( 4) 00:10:36.562 18.489 - 18.584: 99.7275% ( 5) 00:10:36.562 18.584 - 18.679: 99.7790% ( 7) 00:10:36.562 18.679 - 18.773: 99.8306% ( 7) 00:10:36.562 18.773 - 18.868: 99.8674% ( 5) 00:10:36.562 18.868 - 18.963: 99.8821% ( 2) 00:10:36.562 19.153 - 19.247: 99.8895% ( 1) 00:10:36.562 19.342 - 19.437: 99.8969% ( 1) 00:10:36.562 19.437 - 19.532: 99.9042% ( 1) 00:10:36.562 19.532 - 19.627: 99.9116% ( 1) 00:10:36.562 24.273 - 24.462: 99.9190% ( 1) 00:10:36.562 24.841 - 25.031: 99.9263% ( 1) 00:10:36.562 3980.705 - 4004.978: 99.9853% ( 8) 00:10:36.562 4004.978 - 4029.250: 100.0000% ( 2) 00:10:36.562 00:10:36.562 Complete histogram 00:10:36.562 ================== 00:10:36.562 Range in us Cumulative Count 00:10:36.562 2.039 - 2.050: 3.8671% ( 525) 00:10:36.562 2.050 - 2.062: 30.2151% ( 3577) 00:10:36.562 2.062 - 2.074: 34.7378% ( 614) 00:10:36.562 2.074 - 2.086: 46.2876% ( 1568) 00:10:36.562 2.086 - 2.098: 58.7728% ( 1695) 00:10:36.562 2.098 - 2.110: 61.5866% ( 382) 00:10:36.562 2.110 - 2.121: 69.0704% ( 1016) 00:10:36.562 2.121 - 2.133: 76.5174% ( 1011) 00:10:36.562 2.133 - 2.145: 77.9243% ( 191) 00:10:36.562 2.145 - 2.157: 84.5094% ( 894) 00:10:36.562 2.157 - 2.169: 88.1335% ( 492) 00:10:36.562 2.169 - 2.181: 88.9585% ( 112) 00:10:36.562 2.181 - 2.193: 90.3948% ( 195) 00:10:36.562 2.193 - 2.204: 92.0963% ( 231) 00:10:36.562 2.204 - 2.216: 93.6800% ( 215) 00:10:36.562 2.216 - 2.228: 94.6818% ( 
136) 00:10:36.562 2.228 - 2.240: 95.2195% ( 73) 00:10:36.562 2.240 - 2.252: 95.4331% ( 29) 00:10:36.562 2.252 - 2.264: 95.6246% ( 26) 00:10:36.562 2.264 - 2.276: 95.8824% ( 35) 00:10:36.562 2.276 - 2.287: 96.1476% ( 36) 00:10:36.562 2.287 - 2.299: 96.1771% ( 4) 00:10:36.562 2.299 - 2.311: 96.2139% ( 5) 00:10:36.562 2.311 - 2.323: 96.2434% ( 4) 00:10:36.562 2.323 - 2.335: 96.3023% ( 8) 00:10:36.562 2.335 - 2.347: 96.4717% ( 23) 00:10:36.562 2.347 - 2.359: 96.7885% ( 43) 00:10:36.562 2.359 - 2.370: 97.1494% ( 49) 00:10:36.562 2.370 - 2.382: 97.4146% ( 36) 00:10:36.562 2.382 - 2.394: 97.6208% ( 28) 00:10:36.562 2.394 - 2.406: 97.8565% ( 32) 00:10:36.562 2.406 - 2.418: 97.9965% ( 19) 00:10:36.562 2.418 - 2.430: 98.0996% ( 14) 00:10:36.562 2.430 - 2.441: 98.1585% ( 8) 00:10:36.562 2.441 - 2.453: 98.2322% ( 10) 00:10:36.562 2.453 - 2.465: 98.2837% ( 7) 00:10:36.562 2.465 - 2.477: 98.3206% ( 5) 00:10:36.562 2.477 - 2.489: 98.3500% ( 4) 00:10:36.562 2.489 - 2.501: 98.3721% ( 3) 00:10:36.562 2.501 - 2.513: 98.4016% ( 4) 00:10:36.562 2.513 - 2.524: 98.4311% ( 4) 00:10:36.562 2.524 - 2.536: 98.4679% ( 5) 00:10:36.562 2.536 - 2.548: 98.4753% ( 1) 00:10:36.562 2.548 - 2.560: 98.4900% ( 2) 00:10:36.562 2.596 - 2.607: 98.4973% ( 1) 00:10:36.562 2.667 - 2.679: 98.5047% ( 1) 00:10:36.562 2.726 - 2.738: 98.5121% ( 1) 00:10:36.562 2.750 - 2.761: 98.5194% ( 1) 00:10:36.562 2.797 - 2.809: 98.5268% ( 1) 00:10:36.562 2.856 - 2.868: 98.5342% ( 1) 00:10:36.562 3.058 - 3.081: 98.5415% ( 1) 00:10:36.562 3.461 - 3.484: 98.5489% ( 1) 00:10:36.562 3.532 - 3.556: 98.5563% ( 1) 00:10:36.562 3.556 - 3.579: 98.5784% ( 3) 00:10:36.562 3.603 - 3.627: 98.5857% ( 1) 00:10:36.562 3.627 - 3.650: 98.5931% ( 1) 00:10:36.562 3.674 - 3.698: 98.6078% ( 2) 00:10:36.562 3.745 - 3.769: 98.6152% ( 1) 00:10:36.562 3.769 - 3.793: 98.6299% ( 2) 00:10:36.562 3.840 - 3.864: 98.6373% ( 1) 00:10:36.562 3.864 - 3.887: 98.6447% ( 1) 00:10:36.562 4.006 - 4.030: 98.6520% ( 1) 00:10:36.562 4.196 - 4.219: 98.6594% ( 1) 00:10:36.562 4.243 - 4.267: 98.6668% ( 1) 00:10:36.562 4.575 - 4.599: 98.6741% ( 1) 00:10:36.562 4.930 - 4.954: 98.6815% ( 1) 00:10:36.562 5.499 - 5.523: 98.6889% ( 1) 00:10:36.562 5.547 - 5.570: 98.6962% ( 1) 00:10:36.562 5.713 - 5.736: 98.7036% ( 1) 00:10:36.562 5.807 - 5.831: 98.7110% ( 1) 00:10:36.562 5.855 - 5.879: 98.7257% ( 2) 00:10:36.562 6.305 - 6.353: 98.7331% ( 1) 00:10:36.562 6.353 - 6.400: 98.7404% ( 1) 00:10:36.562 6.637 - 6.684: 98.7478% ( 1) 00:10:36.562 7.064 - 7.111: 98.7552% ( 1) 00:10:36.562 7.159 - 7.206: 98.7699% ( 2) 00:10:36.562 7.348 - 7.396: 98.7773% ( 1) 00:10:36.562 7.396 - 7.443: 98.7846% ( 1) 00:10:36.562 7.443 - 7.490: 98.7994% ( 2) 00:10:36.562 7.870 - 7.917: 98.8067% ( 1) 00:10:36.562 8.201 - 8.249: 98.8141% ( 1) 00:10:36.562 8.865 - 8.913: 98.8214% ( 1) 00:10:36.562 9.671 - 9.719: 98.8288% ( 1) 00:10:36.562 15.360 - 15.455: 98.8362% ( 1) 00:10:36.562 15.550 - 15.644: 98.8509% ( 2) 00:10:36.562 15.644 - 15.739: 98.8877% ( 5) 00:10:36.562 15.739 - 15.834: 98.9098% ( 3) 00:10:36.562 15.834 - 15.929: 98.9246% ( 2) 00:10:36.562 15.929 - 16.024: 98.9467% ( 3) 00:10:36.562 16.024 - 16.119: 98.9688% ( 3) 00:10:36.562 16.119 - 16.213: 99.0056% ( 5) 00:10:36.562 16.213 - 16.308: 99.0424% ( 5) 00:10:36.562 16.308 - 16.403: 99.0719% ( 4) 00:10:36.562 16.403 - 16.498: 99.1014% ( 4) 00:10:36.562 16.498 - 16.593: 99.1161% ( 2) 00:10:36.562 16.593 - 16.687: 99.1382% ( 3) 00:10:36.562 16.687 - 16.782: 99.1676% ( 4) 00:10:36.562 16.782 - 16.877: 99.1824% ( 2) 00:10:36.562 16.877 - 16.972: 99.2045% ( 3) 00:10:36.562 
16.972 - 17.067: 99.2266% ( 3) 00:10:36.562 17.067 - 17.161: 99.2413% ( 2) 00:10:36.562 17.161 - 17.256: 99.2487% ( 1) 00:10:36.562 17.825 - 17.920: 99.2560% ( 1) 00:10:36.562 17.920 - 18.015: 99.2634% ( 1) 00:10:36.562 18.394 - 18.489: 99.2708% ( 1) 00:10:36.562 18.489 - 18.584: 99.2781% ( 1) 00:10:36.562 18.773 - 18.868: 99.2855% ( 1) 00:10:36.562 19.058 - 19.153: 99.2929% ( 1) 00:10:36.562 20.385 - 20.480: 99.3002% ( 1) 00:10:36.562 22.945 - 23.040: 99.3076% ( 1) 00:10:36.562 25.600 - 25.790: 99.3150% ( 1) 00:10:36.562 27.686 - 27.876: 9[2024-07-15 12:49:54.342634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:36.562 9.3223% ( 1) 00:10:36.562 3980.705 - 4004.978: 99.7569% ( 59) 00:10:36.562 4004.978 - 4029.250: 100.0000% ( 33) 00:10:36.562 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:36.562 [ 00:10:36.562 { 00:10:36.562 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:36.562 "subtype": "Discovery", 00:10:36.562 "listen_addresses": [], 00:10:36.562 "allow_any_host": true, 00:10:36.562 "hosts": [] 00:10:36.562 }, 00:10:36.562 { 00:10:36.562 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:36.562 "subtype": "NVMe", 00:10:36.562 "listen_addresses": [ 00:10:36.562 { 00:10:36.562 "trtype": "VFIOUSER", 00:10:36.562 "adrfam": "IPv4", 00:10:36.562 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:36.562 "trsvcid": "0" 00:10:36.562 } 00:10:36.562 ], 00:10:36.562 "allow_any_host": true, 00:10:36.562 "hosts": [], 00:10:36.562 "serial_number": "SPDK1", 00:10:36.562 "model_number": "SPDK bdev Controller", 00:10:36.562 "max_namespaces": 32, 00:10:36.562 "min_cntlid": 1, 00:10:36.562 "max_cntlid": 65519, 00:10:36.562 "namespaces": [ 00:10:36.562 { 00:10:36.562 "nsid": 1, 00:10:36.562 "bdev_name": "Malloc1", 00:10:36.562 "name": "Malloc1", 00:10:36.562 "nguid": "0B7A24976F914CC58EDD25EE916B8B8D", 00:10:36.562 "uuid": "0b7a2497-6f91-4cc5-8edd-25ee916b8b8d" 00:10:36.562 }, 00:10:36.562 { 00:10:36.562 "nsid": 2, 00:10:36.562 "bdev_name": "Malloc3", 00:10:36.562 "name": "Malloc3", 00:10:36.562 "nguid": "F2C4239A23F14EDB862E26313BF939FE", 00:10:36.562 "uuid": "f2c4239a-23f1-4edb-862e-26313bf939fe" 00:10:36.562 } 00:10:36.562 ] 00:10:36.562 }, 00:10:36.562 { 00:10:36.562 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:36.562 "subtype": "NVMe", 00:10:36.562 "listen_addresses": [ 00:10:36.562 { 00:10:36.562 "trtype": "VFIOUSER", 00:10:36.562 "adrfam": "IPv4", 00:10:36.562 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:36.562 "trsvcid": "0" 00:10:36.562 } 00:10:36.562 ], 00:10:36.562 "allow_any_host": true, 00:10:36.562 "hosts": [], 00:10:36.562 "serial_number": "SPDK2", 00:10:36.562 "model_number": "SPDK bdev Controller", 00:10:36.562 "max_namespaces": 32, 00:10:36.562 "min_cntlid": 1, 00:10:36.562 "max_cntlid": 65519, 00:10:36.562 "namespaces": [ 00:10:36.562 { 00:10:36.562 "nsid": 1, 
00:10:36.562 "bdev_name": "Malloc2", 00:10:36.562 "name": "Malloc2", 00:10:36.562 "nguid": "BEC968E0C5824A4391C4564ED0A8CF06", 00:10:36.562 "uuid": "bec968e0-c582-4a43-91c4-564ed0a8cf06" 00:10:36.562 } 00:10:36.562 ] 00:10:36.562 } 00:10:36.562 ] 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3340984 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:36.562 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:36.562 EAL: No free 2048 kB hugepages reported on node 1 00:10:36.819 [2024-07-15 12:49:54.804234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:36.819 Malloc4 00:10:36.819 12:49:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:37.075 [2024-07-15 12:49:55.206187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:37.075 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:37.075 Asynchronous Event Request test 00:10:37.075 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.075 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:37.075 Registering asynchronous event callbacks... 00:10:37.075 Starting namespace attribute notice tests for all controllers... 00:10:37.075 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:37.075 aer_cb - Changed Namespace 00:10:37.075 Cleaning up... 
00:10:37.332 [ 00:10:37.332 { 00:10:37.332 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.332 "subtype": "Discovery", 00:10:37.332 "listen_addresses": [], 00:10:37.332 "allow_any_host": true, 00:10:37.332 "hosts": [] 00:10:37.332 }, 00:10:37.332 { 00:10:37.332 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:37.332 "subtype": "NVMe", 00:10:37.332 "listen_addresses": [ 00:10:37.332 { 00:10:37.332 "trtype": "VFIOUSER", 00:10:37.332 "adrfam": "IPv4", 00:10:37.332 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:37.332 "trsvcid": "0" 00:10:37.332 } 00:10:37.332 ], 00:10:37.332 "allow_any_host": true, 00:10:37.332 "hosts": [], 00:10:37.332 "serial_number": "SPDK1", 00:10:37.332 "model_number": "SPDK bdev Controller", 00:10:37.332 "max_namespaces": 32, 00:10:37.332 "min_cntlid": 1, 00:10:37.332 "max_cntlid": 65519, 00:10:37.332 "namespaces": [ 00:10:37.332 { 00:10:37.332 "nsid": 1, 00:10:37.332 "bdev_name": "Malloc1", 00:10:37.332 "name": "Malloc1", 00:10:37.332 "nguid": "0B7A24976F914CC58EDD25EE916B8B8D", 00:10:37.332 "uuid": "0b7a2497-6f91-4cc5-8edd-25ee916b8b8d" 00:10:37.332 }, 00:10:37.332 { 00:10:37.332 "nsid": 2, 00:10:37.332 "bdev_name": "Malloc3", 00:10:37.332 "name": "Malloc3", 00:10:37.332 "nguid": "F2C4239A23F14EDB862E26313BF939FE", 00:10:37.332 "uuid": "f2c4239a-23f1-4edb-862e-26313bf939fe" 00:10:37.332 } 00:10:37.332 ] 00:10:37.332 }, 00:10:37.332 { 00:10:37.332 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:37.332 "subtype": "NVMe", 00:10:37.332 "listen_addresses": [ 00:10:37.332 { 00:10:37.332 "trtype": "VFIOUSER", 00:10:37.332 "adrfam": "IPv4", 00:10:37.332 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:37.332 "trsvcid": "0" 00:10:37.332 } 00:10:37.332 ], 00:10:37.332 "allow_any_host": true, 00:10:37.332 "hosts": [], 00:10:37.332 "serial_number": "SPDK2", 00:10:37.332 "model_number": "SPDK bdev Controller", 00:10:37.332 "max_namespaces": 32, 00:10:37.332 "min_cntlid": 1, 00:10:37.332 "max_cntlid": 65519, 00:10:37.332 "namespaces": [ 00:10:37.332 { 00:10:37.332 "nsid": 1, 00:10:37.332 "bdev_name": "Malloc2", 00:10:37.332 "name": "Malloc2", 00:10:37.332 "nguid": "BEC968E0C5824A4391C4564ED0A8CF06", 00:10:37.332 "uuid": "bec968e0-c582-4a43-91c4-564ed0a8cf06" 00:10:37.332 }, 00:10:37.332 { 00:10:37.332 "nsid": 2, 00:10:37.332 "bdev_name": "Malloc4", 00:10:37.332 "name": "Malloc4", 00:10:37.332 "nguid": "AED6674FDA324A57B11D1D2E1EE20BD3", 00:10:37.332 "uuid": "aed6674f-da32-4a57-b11d-1d2e1ee20bd3" 00:10:37.332 } 00:10:37.332 ] 00:10:37.332 } 00:10:37.332 ] 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3340984 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3335376 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3335376 ']' 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3335376 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3335376 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3335376' 00:10:37.332 killing process with pid 3335376 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3335376 00:10:37.332 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3335376 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3341128 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3341128' 00:10:37.896 Process pid: 3341128 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3341128 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3341128 ']' 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.896 12:49:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:37.896 [2024-07-15 12:49:55.925401] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:37.896 [2024-07-15 12:49:55.926463] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:10:37.896 [2024-07-15 12:49:55.926531] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.896 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.896 [2024-07-15 12:49:55.983274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.896 [2024-07-15 12:49:56.082115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.896 [2024-07-15 12:49:56.082171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:37.896 [2024-07-15 12:49:56.082192] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.896 [2024-07-15 12:49:56.082209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.896 [2024-07-15 12:49:56.082223] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.896 [2024-07-15 12:49:56.082322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.896 [2024-07-15 12:49:56.082427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.896 [2024-07-15 12:49:56.082502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.896 [2024-07-15 12:49:56.082510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.153 [2024-07-15 12:49:56.180841] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:38.153 [2024-07-15 12:49:56.181082] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:38.153 [2024-07-15 12:49:56.181380] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:38.153 [2024-07-15 12:49:56.182051] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:38.153 [2024-07-15 12:49:56.182325] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:10:38.153 12:49:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:38.153 12:49:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:10:38.153 12:49:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:39.083 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:39.342 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:39.342 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:39.342 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:39.342 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:39.342 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:39.601 Malloc1 00:10:39.859 12:49:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:40.116 12:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:40.373 12:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:40.373 12:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:10:40.373 12:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:40.373 12:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:40.631 Malloc2 00:10:40.631 12:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:40.888 12:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:41.145 12:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3341128 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3341128 ']' 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3341128 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:41.402 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3341128 00:10:41.660 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:41.660 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:41.660 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3341128' 00:10:41.660 killing process with pid 3341128 00:10:41.660 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3341128 00:10:41.660 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3341128 00:10:41.918 12:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:41.918 12:49:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:41.918 00:10:41.918 real 0m52.694s 00:10:41.918 user 3m27.860s 00:10:41.918 sys 0m4.391s 00:10:41.918 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.918 12:49:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:41.918 ************************************ 00:10:41.918 END TEST nvmf_vfio_user 00:10:41.918 ************************************ 00:10:41.918 12:49:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:41.918 12:49:59 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:41.918 12:49:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:41.918 12:49:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.918 12:49:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:41.918 ************************************ 00:10:41.918 START 
TEST nvmf_vfio_user_nvme_compliance 00:10:41.918 ************************************ 00:10:41.918 12:49:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:41.918 * Looking for test storage... 00:10:41.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.918 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3341634 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3341634' 00:10:41.919 Process pid: 3341634 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3341634 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3341634 ']' 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.919 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:41.919 [2024-07-15 12:50:00.103500] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:10:41.919 [2024-07-15 12:50:00.103596] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.177 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.177 [2024-07-15 12:50:00.167818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:42.177 [2024-07-15 12:50:00.277940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.177 [2024-07-15 12:50:00.278007] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.177 [2024-07-15 12:50:00.278028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.177 [2024-07-15 12:50:00.278046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.177 [2024-07-15 12:50:00.278061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
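Editor's note: the target-side provisioning performed both by the earlier nvmf_vfio_user.sh runs and by this compliance run is the same handful of rpc.py calls traced above — create the VFIOUSER transport, then one malloc bdev, one subsystem, one namespace and one vfio-user listener per controller. A condensed sketch of that sequence, using only the commands, paths and sizes visible in this log:

```bash
# Condensed sketch of the vfio-user target setup traced in this log.
# Assumptions: nvmf_tgt is already running; add "-M -I" to nvmf_create_transport
# for the interrupt-mode variant exercised above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $RPC bdev_malloc_create 64 512 -b Malloc$i            # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
         -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
```

The compliance test below repeats the same steps for a single controller at /var/run/vfio-user with subsystem nqn.2021-09.io.spdk:cnode0 before launching nvme_compliance against it.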
00:10:42.177 [2024-07-15 12:50:00.278146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:42.177 [2024-07-15 12:50:00.278212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:42.177 [2024-07-15 12:50:00.278218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.434 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.434 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:10:42.434 12:50:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.367 malloc0 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.367 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:43.368 12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.368 
12:50:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:43.368 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.625 00:10:43.625 00:10:43.625 CUnit - A unit testing framework for C - Version 2.1-3 00:10:43.625 http://cunit.sourceforge.net/ 00:10:43.625 00:10:43.625 00:10:43.625 Suite: nvme_compliance 00:10:43.625 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 12:50:01.628854] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.625 [2024-07-15 12:50:01.632400] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:43.625 [2024-07-15 12:50:01.632424] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:43.625 [2024-07-15 12:50:01.632437] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:43.625 [2024-07-15 12:50:01.633895] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.625 passed 00:10:43.625 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 12:50:01.717459] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.625 [2024-07-15 12:50:01.722490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.625 passed 00:10:43.625 Test: admin_identify_ns ...[2024-07-15 12:50:01.808280] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.882 [2024-07-15 12:50:01.867755] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:43.882 [2024-07-15 12:50:01.875756] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:43.882 [2024-07-15 12:50:01.896882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.882 passed 00:10:43.882 Test: admin_get_features_mandatory_features ...[2024-07-15 12:50:01.982075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.882 [2024-07-15 12:50:01.985092] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:43.882 passed 00:10:43.882 Test: admin_get_features_optional_features ...[2024-07-15 12:50:02.069620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:43.882 [2024-07-15 12:50:02.072638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.140 passed 00:10:44.140 Test: admin_set_features_number_of_queues ...[2024-07-15 12:50:02.158947] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.140 [2024-07-15 12:50:02.263859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.140 passed 00:10:44.397 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 12:50:02.347513] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.397 [2024-07-15 12:50:02.350520] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.397 passed 00:10:44.397 Test: admin_get_log_page_with_lpo ...[2024-07-15 12:50:02.433712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.397 [2024-07-15 12:50:02.500755] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:44.397 [2024-07-15 12:50:02.513834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.397 passed 00:10:44.397 Test: fabric_property_get ...[2024-07-15 12:50:02.600498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.397 [2024-07-15 12:50:02.601806] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:44.397 [2024-07-15 12:50:02.603521] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.655 passed 00:10:44.655 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 12:50:02.684077] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.655 [2024-07-15 12:50:02.685407] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:44.655 [2024-07-15 12:50:02.687113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.655 passed 00:10:44.655 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 12:50:02.771402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.655 [2024-07-15 12:50:02.854760] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:44.913 [2024-07-15 12:50:02.870764] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:44.913 [2024-07-15 12:50:02.875868] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.913 passed 00:10:44.913 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 12:50:02.962035] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:44.913 [2024-07-15 12:50:02.963373] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:44.913 [2024-07-15 12:50:02.965069] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:44.913 passed 00:10:44.913 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 12:50:03.048205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.169 [2024-07-15 12:50:03.123753] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:45.169 [2024-07-15 12:50:03.147765] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:45.169 [2024-07-15 12:50:03.152875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.169 passed 00:10:45.169 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 12:50:03.237534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.169 [2024-07-15 12:50:03.238871] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:45.169 [2024-07-15 12:50:03.238918] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:45.169 [2024-07-15 12:50:03.240562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.169 passed 00:10:45.169 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 12:50:03.322782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.426 [2024-07-15 12:50:03.416752] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:10:45.426 [2024-07-15 12:50:03.424768] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:45.426 [2024-07-15 12:50:03.432750] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:45.426 [2024-07-15 12:50:03.440766] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:45.426 [2024-07-15 12:50:03.469883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.426 passed 00:10:45.426 Test: admin_create_io_sq_verify_pc ...[2024-07-15 12:50:03.554548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:45.426 [2024-07-15 12:50:03.570764] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:45.426 [2024-07-15 12:50:03.588042] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:45.426 passed 00:10:45.683 Test: admin_create_io_qp_max_qps ...[2024-07-15 12:50:03.670581] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:46.616 [2024-07-15 12:50:04.772755] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:47.180 [2024-07-15 12:50:05.161700] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.180 passed 00:10:47.180 Test: admin_create_io_sq_shared_cq ...[2024-07-15 12:50:05.245300] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:47.180 [2024-07-15 12:50:05.376744] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:47.437 [2024-07-15 12:50:05.413839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:47.437 passed 00:10:47.437 00:10:47.437 Run Summary: Type Total Ran Passed Failed Inactive 00:10:47.437 suites 1 1 n/a 0 0 00:10:47.437 tests 18 18 18 0 0 00:10:47.437 asserts 360 360 360 0 n/a 00:10:47.437 00:10:47.437 Elapsed time = 1.568 seconds 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3341634 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3341634 ']' 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3341634 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3341634 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3341634' 00:10:47.437 killing process with pid 3341634 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3341634 00:10:47.437 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3341634 00:10:47.695 12:50:05 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:47.695 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:47.695 00:10:47.695 real 0m5.805s 00:10:47.695 user 0m16.213s 00:10:47.695 sys 0m0.554s 00:10:47.695 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.695 12:50:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:47.696 ************************************ 00:10:47.696 END TEST nvmf_vfio_user_nvme_compliance 00:10:47.696 ************************************ 00:10:47.696 12:50:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:47.696 12:50:05 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:47.696 12:50:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:47.696 12:50:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.696 12:50:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:47.696 ************************************ 00:10:47.696 START TEST nvmf_vfio_user_fuzz 00:10:47.696 ************************************ 00:10:47.696 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:47.696 * Looking for test storage... 00:10:47.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.696 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.696 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:47.696 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.954 12:50:05 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:47.954 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3342447 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3342447' 00:10:47.955 Process pid: 3342447 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3342447 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3342447 ']' 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
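Once that target is up, the fuzz script provisions the same style of vfio-user subsystem and then points nvme_fuzz at it; the RPC sequence and fuzzer invocation traced below amount to roughly the following sketch (rpc_cmd in the test harness ultimately drives scripts/rpc.py; names, sizes and flags are the ones used in this run):

rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
$rpc bdev_malloc_create 64 512 -b malloc0          # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0
# 30-second admin+I/O fuzz pass against the vfio-user endpoint, fixed seed.
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a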
00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.955 12:50:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:48.213 12:50:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.213 12:50:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:10:48.213 12:50:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.145 malloc0 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:49.145 12:50:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:21.274 Fuzzing completed. 
Shutting down the fuzz application 00:11:21.274 00:11:21.274 Dumping successful admin opcodes: 00:11:21.274 8, 9, 10, 24, 00:11:21.274 Dumping successful io opcodes: 00:11:21.274 0, 00:11:21.274 NS: 0x200003a1ef00 I/O qp, Total commands completed: 661937, total successful commands: 2581, random_seed: 3352982656 00:11:21.274 NS: 0x200003a1ef00 admin qp, Total commands completed: 84692, total successful commands: 674, random_seed: 3103605952 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3342447 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3342447 ']' 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3342447 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3342447 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3342447' 00:11:21.274 killing process with pid 3342447 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3342447 00:11:21.274 12:50:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3342447 00:11:21.274 12:50:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:11:21.274 12:50:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:21.274 00:11:21.274 real 0m32.300s 00:11:21.274 user 0m30.161s 00:11:21.274 sys 0m29.764s 00:11:21.274 12:50:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.274 12:50:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:21.274 ************************************ 00:11:21.274 END TEST nvmf_vfio_user_fuzz 00:11:21.274 ************************************ 00:11:21.274 12:50:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:21.274 12:50:38 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.274 12:50:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:21.274 12:50:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.274 12:50:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:21.274 ************************************ 00:11:21.274 
START TEST nvmf_host_management 00:11:21.274 ************************************ 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:21.274 * Looking for test storage... 00:11:21.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.274 12:50:38 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.274 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.275 12:50:38 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.275 12:50:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:22.210 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:22.210 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:22.210 Found net devices under 0000:84:00.0: cvl_0_0 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.210 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:22.211 Found net devices under 0000:84:00.1: cvl_0_1 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.211 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:11:22.469 00:11:22.469 --- 10.0.0.2 ping statistics --- 00:11:22.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.469 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:11:22.469 00:11:22.469 --- 10.0.0.1 ping statistics --- 00:11:22.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.469 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3347921 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3347921 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3347921 ']' 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:22.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.469 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.469 [2024-07-15 12:50:40.619259] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:11:22.469 [2024-07-15 12:50:40.619345] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.469 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.726 [2024-07-15 12:50:40.687566] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.726 [2024-07-15 12:50:40.800513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.726 [2024-07-15 12:50:40.800589] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.726 [2024-07-15 12:50:40.800603] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.726 [2024-07-15 12:50:40.800617] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.726 [2024-07-15 12:50:40.800627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.726 [2024-07-15 12:50:40.800715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.726 [2024-07-15 12:50:40.800762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.726 [2024-07-15 12:50:40.800862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:22.726 [2024-07-15 12:50:40.800866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.726 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.726 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:22.726 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.726 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.726 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.984 [2024-07-15 12:50:40.957689] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.984 12:50:40 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.984 12:50:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.984 Malloc0 00:11:22.984 [2024-07-15 12:50:41.022732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3347970 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3347970 /var/tmp/bdevperf.sock 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3347970 ']' 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:22.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
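bdevperf is then launched against the TCP target with a JSON config generated on the fly: gen_nvmf_target_json (from the nvmf test helpers) prints the bdev_nvme_attach_controller config shown just below, and the test feeds it to bdevperf over a process-substitution fd (/dev/fd/63 in the trace). Approximately, with the flag values of this run:

# Sketch of the bdevperf launch traced here.
"$SPDK_DIR/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!    # queue depth 64, 64 KiB I/O, verify workload, 10 s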
00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:22.984 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:22.984 { 00:11:22.984 "params": { 00:11:22.985 "name": "Nvme$subsystem", 00:11:22.985 "trtype": "$TEST_TRANSPORT", 00:11:22.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:22.985 "adrfam": "ipv4", 00:11:22.985 "trsvcid": "$NVMF_PORT", 00:11:22.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:22.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:22.985 "hdgst": ${hdgst:-false}, 00:11:22.985 "ddgst": ${ddgst:-false} 00:11:22.985 }, 00:11:22.985 "method": "bdev_nvme_attach_controller" 00:11:22.985 } 00:11:22.985 EOF 00:11:22.985 )") 00:11:22.985 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:22.985 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:22.985 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:22.985 12:50:41 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:22.985 "params": { 00:11:22.985 "name": "Nvme0", 00:11:22.985 "trtype": "tcp", 00:11:22.985 "traddr": "10.0.0.2", 00:11:22.985 "adrfam": "ipv4", 00:11:22.985 "trsvcid": "4420", 00:11:22.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:22.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:22.985 "hdgst": false, 00:11:22.985 "ddgst": false 00:11:22.985 }, 00:11:22.985 "method": "bdev_nvme_attach_controller" 00:11:22.985 }' 00:11:22.985 [2024-07-15 12:50:41.094050] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:11:22.985 [2024-07-15 12:50:41.094151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3347970 ] 00:11:22.985 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.985 [2024-07-15 12:50:41.158933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.242 [2024-07-15 12:50:41.270784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.500 Running I/O for 10 seconds... 
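The JSON above simply tells bdevperf to attach one NVMe-oF controller (Nvme0) over TCP to 10.0.0.2:4420 with digests disabled; against an already-running SPDK application the same attachment could be expressed as a single RPC, e.g. (values copied from the JSON, sketch only):

$SPDK_DIR/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The test then polls bdev_get_iostat on Nvme0n1 over the bdevperf RPC socket, as traced below, and treats the run as started once at least 100 reads have completed.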
00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.067 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 [2024-07-15 12:50:42.104176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104268] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be 
set 00:11:24.067 [2024-07-15 12:50:42.104281] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104377] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104389] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104425] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104578] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.067 [2024-07-15 12:50:42.104661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104674] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104821] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104845] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104934] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104946] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.104995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.105007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.105042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff33a0 is same with the state(5) to be set 00:11:24.068 [2024-07-15 12:50:42.105147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.105970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.105985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.106000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.106015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.106029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.106050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.106063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.106096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.106110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.106125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.068 [2024-07-15 12:50:42.106154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.068 [2024-07-15 12:50:42.106171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106579] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.069 [2024-07-15 12:50:42.106874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.106982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.106998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:24.069 [2024-07-15 12:50:42.107040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.069 [2024-07-15 12:50:42.107159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107206] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:24.069 [2024-07-15 12:50:42.107253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:24.069 [2024-07-15 12:50:42.107269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2130a10 is same with the state(5) to be set 00:11:24.069 [2024-07-15 12:50:42.107355] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2130a10 was disconnected and freed. reset controller. 00:11:24.069 [2024-07-15 12:50:42.107427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.069 [2024-07-15 12:50:42.107450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.069 [2024-07-15 12:50:42.107481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.069 [2024-07-15 12:50:42.107510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.069 [2024-07-15 12:50:42.107542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.069 [2024-07-15 12:50:42.107555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cff540 is same with the state(5) to be set 00:11:24.069 [2024-07-15 12:50:42.108720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:24.069 task offset: 90112 on job bdev=Nvme0n1 fails 00:11:24.069 00:11:24.069 Latency(us) 00:11:24.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.070 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:24.070 Job: Nvme0n1 ended in about 0.49 seconds with error 00:11:24.070 Verification LBA range: start 0x0 length 0x400 00:11:24.070 Nvme0n1 : 0.49 1440.43 90.03 130.95 0.00 39739.78 6092.42 33787.45 00:11:24.070 =================================================================================================================== 00:11:24.070 Total : 1440.43 90.03 130.95 0.00 39739.78 6092.42 33787.45 00:11:24.070 [2024-07-15 12:50:42.110770] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:11:24.070 [2024-07-15 12:50:42.110809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cff540 (9): Bad file descriptor 00:11:24.070 12:50:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.070 12:50:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:24.070 [2024-07-15 12:50:42.159304] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3347970 00:11:25.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3347970) - No such process 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:25.003 { 00:11:25.003 "params": { 00:11:25.003 "name": "Nvme$subsystem", 00:11:25.003 "trtype": "$TEST_TRANSPORT", 00:11:25.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:25.003 "adrfam": "ipv4", 00:11:25.003 "trsvcid": "$NVMF_PORT", 00:11:25.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:25.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:25.003 "hdgst": ${hdgst:-false}, 00:11:25.003 "ddgst": ${ddgst:-false} 00:11:25.003 }, 00:11:25.003 "method": "bdev_nvme_attach_controller" 00:11:25.003 } 00:11:25.003 EOF 00:11:25.003 )") 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:25.003 12:50:43 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:25.003 "params": { 00:11:25.003 "name": "Nvme0", 00:11:25.003 "trtype": "tcp", 00:11:25.003 "traddr": "10.0.0.2", 00:11:25.003 "adrfam": "ipv4", 00:11:25.003 "trsvcid": "4420", 00:11:25.003 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:25.003 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:25.003 "hdgst": false, 00:11:25.003 "ddgst": false 00:11:25.003 }, 00:11:25.003 "method": "bdev_nvme_attach_controller" 00:11:25.003 }' 00:11:25.003 [2024-07-15 12:50:43.163969] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
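Before this second bdevperf pass initializes, it helps to spell out how the first pass above was gated: the waitforio helper traced at host_management.sh@54-60 polls bdev_get_iostat on the bdevperf RPC socket and declares the run healthy once Nvme0n1 has completed at least 100 reads (the trace reports read_io_count=643). A simplified re-implementation under those assumptions follows; the retry count of 10 and the 100-read threshold come from the trace, while the function name, the sleep pacing and the direct use of scripts/rpc.py are illustrative.

#!/usr/bin/env bash
# Sketch of the waitforio-style gate: poll read ops over the bdevperf RPC socket,
# succeed once the bdev has served >= 100 reads, give up after 10 attempts.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

waitforio_sketch() {
    local sock=$1 bdev=$2 i ops
    for ((i = 10; i != 0; i--)); do
        ops=$("$SPDK/scripts/rpc.py" -s "$sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')
        [[ $ops =~ ^[0-9]+$ ]] || ops=0   # treat a missing stat as zero reads
        (( ops >= 100 )) && return 0      # trace: 643 -ge 100, so the first poll passes
        sleep 1                           # pacing between polls is an assumption, not shown in the trace
    done
    return 1
}

waitforio_sketch /var/tmp/bdevperf.sock Nvme0n1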
00:11:25.003 [2024-07-15 12:50:43.164072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3348248 ] 00:11:25.003 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.260 [2024-07-15 12:50:43.224787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.260 [2024-07-15 12:50:43.337557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.517 Running I/O for 1 seconds... 00:11:26.891 00:11:26.891 Latency(us) 00:11:26.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.891 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:26.891 Verification LBA range: start 0x0 length 0x400 00:11:26.891 Nvme0n1 : 1.03 1557.65 97.35 0.00 0.00 40439.63 6699.24 34175.81 00:11:26.891 =================================================================================================================== 00:11:26.891 Total : 1557.65 97.35 0.00 0.00 40439.63 6699.24 34175.81 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:26.891 rmmod nvme_tcp 00:11:26.891 rmmod nvme_fabrics 00:11:26.891 rmmod nvme_keyring 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3347921 ']' 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3347921 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3347921 ']' 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3347921 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:26.891 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3347921 00:11:27.150 12:50:45 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:27.150 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:27.150 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3347921' 00:11:27.150 killing process with pid 3347921 00:11:27.150 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3347921 00:11:27.150 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3347921 00:11:27.408 [2024-07-15 12:50:45.391605] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.408 12:50:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.334 12:50:47 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.334 12:50:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:29.334 00:11:29.334 real 0m9.271s 00:11:29.334 user 0m21.808s 00:11:29.334 sys 0m2.955s 00:11:29.334 12:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.334 12:50:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.334 ************************************ 00:11:29.334 END TEST nvmf_host_management 00:11:29.334 ************************************ 00:11:29.334 12:50:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:29.334 12:50:47 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:29.334 12:50:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.334 12:50:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:29.334 12:50:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:29.334 ************************************ 00:11:29.335 START TEST nvmf_lvol 00:11:29.335 ************************************ 00:11:29.335 12:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:29.593 * Looking for test storage... 
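The host_management run above finishes with the standard nvmftestfini teardown: unload the NVMe transport modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), kill the target process by pid, and flush the test interface, so that the nvmf_lvol test starting here gets a clean slate. A condensed sketch of that sequence follows; the retry structure and pacing are inferred from the single logged iteration, and the polling loop stands in for the wait builtin the traced helper can use because it launched the target itself.

#!/usr/bin/env bash
# Sketch of the nvmftestfini-style cleanup traced above for target pid 3347921.
cleanup_sketch() {
    local pid=$1 i
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # in the trace this also drops nvme_fabrics/nvme_keyring deps
        sleep 1                            # retry pacing is an assumption
    done
    modprobe -v -r nvme-fabrics
    set -e
    if kill -0 "$pid" 2>/dev/null; then    # only signal a process that still exists
        kill "$pid"
        while kill -0 "$pid" 2>/dev/null; do sleep 1; done
    fi
    ip -4 addr flush cvl_0_1 2>/dev/null || true   # interface name taken from the trace
}

cleanup_sketch 3347921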
00:11:29.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.593 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.593 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:29.593 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.593 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.594 12:50:47 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:29.594 12:50:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.497 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:31.498 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:31.498 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:31.498 Found net devices under 0000:84:00.0: cvl_0_0 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:31.498 Found net devices under 0000:84:00.1: cvl_0_1 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:31.498 
12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.498 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:31.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:11:31.756 00:11:31.756 --- 10.0.0.2 ping statistics --- 00:11:31.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.756 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:31.756 00:11:31.756 --- 10.0.0.1 ping statistics --- 00:11:31.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.756 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.756 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3350466 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3350466 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3350466 ']' 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:31.757 12:50:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:31.757 [2024-07-15 12:50:49.900866] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:11:31.757 [2024-07-15 12:50:49.900960] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.757 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.015 [2024-07-15 12:50:49.965448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:32.015 [2024-07-15 12:50:50.086206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:32.015 [2024-07-15 12:50:50.086274] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
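For reference, the nvmf_tcp_init sequence traced above reduces to the shell steps below. This is a condensed sketch, not part of the trace; it assumes the same E810 ports (cvl_0_0, cvl_0_1) and 10.0.0.x addressing discovered in this run, and exists so one host can act as both NVMe/TCP target and initiator over real NICs. Other hosts will report different interface names.

# put the target port in its own namespace: 10.0.0.2 (target) lives on cvl_0_0 inside
# cvl_0_0_ns_spdk, 10.0.0.1 (initiator) stays on cvl_0_1 in the default namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in on the initiator-side interface, check reachability
# both ways, and make sure the kernel initiator module is available
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp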
00:11:32.015 [2024-07-15 12:50:50.086304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:32.015 [2024-07-15 12:50:50.086316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:32.015 [2024-07-15 12:50:50.086326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:32.015 [2024-07-15 12:50:50.086388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:32.015 [2024-07-15 12:50:50.086459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:32.015 [2024-07-15 12:50:50.086461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.015 12:50:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:32.015 12:50:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:32.015 12:50:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:32.015 12:50:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:32.015 12:50:50 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:32.272 12:50:50 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:32.272 12:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:32.272 [2024-07-15 12:50:50.462651] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:32.531 12:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:32.789 12:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:32.789 12:50:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:33.045 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:33.045 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:33.302 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:33.560 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=da310364-7dda-497e-b3b4-c88b43b1b222 00:11:33.560 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u da310364-7dda-497e-b3b4-c88b43b1b222 lvol 20 00:11:33.817 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=37deca1b-f5f2-4230-ba5a-0a4d66436332 00:11:33.817 12:50:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:34.075 12:50:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 37deca1b-f5f2-4230-ba5a-0a4d66436332 00:11:34.075 12:50:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:11:34.333 [2024-07-15 12:50:52.499200] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.333 12:50:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:34.590 12:50:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3350891 00:11:34.590 12:50:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:34.590 12:50:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:34.848 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.781 12:50:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 37deca1b-f5f2-4230-ba5a-0a4d66436332 MY_SNAPSHOT 00:11:36.039 12:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5539360c-e420-4481-9ea1-7a10d5a7f7e8 00:11:36.039 12:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 37deca1b-f5f2-4230-ba5a-0a4d66436332 30 00:11:36.297 12:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5539360c-e420-4481-9ea1-7a10d5a7f7e8 MY_CLONE 00:11:36.861 12:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5e0b41ba-b071-4c01-a908-2d5993e76a03 00:11:36.861 12:50:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5e0b41ba-b071-4c01-a908-2d5993e76a03 00:11:37.427 12:50:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3350891 00:11:45.615 Initializing NVMe Controllers 00:11:45.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:45.615 Controller IO queue size 128, less than required. 00:11:45.615 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:45.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:45.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:45.615 Initialization complete. Launching workers. 
00:11:45.615 ======================================================== 00:11:45.615 Latency(us) 00:11:45.615 Device Information : IOPS MiB/s Average min max 00:11:45.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10734.96 41.93 11934.82 1680.10 76236.93 00:11:45.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10627.66 41.51 12055.83 2195.43 78422.16 00:11:45.615 ======================================================== 00:11:45.615 Total : 21362.62 83.45 11995.02 1680.10 78422.16 00:11:45.615 00:11:45.615 12:51:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:45.615 12:51:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 37deca1b-f5f2-4230-ba5a-0a4d66436332 00:11:45.615 12:51:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da310364-7dda-497e-b3b4-c88b43b1b222 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.872 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.872 rmmod nvme_tcp 00:11:45.872 rmmod nvme_fabrics 00:11:45.872 rmmod nvme_keyring 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3350466 ']' 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3350466 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3350466 ']' 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3350466 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3350466 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3350466' 00:11:46.130 killing process with pid 3350466 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3350466 00:11:46.130 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3350466 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.388 
12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.388 12:51:04 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.291 12:51:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.291 00:11:48.291 real 0m18.954s 00:11:48.291 user 1m4.507s 00:11:48.291 sys 0m5.724s 00:11:48.291 12:51:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:48.291 12:51:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:48.291 ************************************ 00:11:48.291 END TEST nvmf_lvol 00:11:48.291 ************************************ 00:11:48.550 12:51:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:48.550 12:51:06 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:48.550 12:51:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:48.550 12:51:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.550 12:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.550 ************************************ 00:11:48.550 START TEST nvmf_lvs_grow 00:11:48.550 ************************************ 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:48.550 * Looking for test storage... 
00:11:48.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.550 12:51:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:51.082 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.082 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:51.083 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:51.083 Found net devices under 0000:84:00.0: cvl_0_0 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:51.083 Found net devices under 0000:84:00.1: cvl_0_1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:51.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:11:51.083 00:11:51.083 --- 10.0.0.2 ping statistics --- 00:11:51.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.083 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:11:51.083 00:11:51.083 --- 10.0.0.1 ping statistics --- 00:11:51.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.083 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3354696 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3354696 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3354696 ']' 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:51.083 12:51:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.083 [2024-07-15 12:51:09.023041] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:11:51.083 [2024-07-15 12:51:09.023127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.083 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.083 [2024-07-15 12:51:09.085920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.083 [2024-07-15 12:51:09.197505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.083 [2024-07-15 12:51:09.197572] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
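The nvmfappstart step traced here launches nvmf_tgt inside that namespace and blocks until its RPC socket answers before any transport is created. A rough sketch of the same flow follows; the poll loop is only a stand-in for the harness's waitforlisten helper (which also handles timeouts and pid checks), not a copy of it.

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# single-core target (-m 0x1) with all tracepoint groups enabled, run in the target namespace
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# wait until the RPC server on /var/tmp/spdk.sock is up
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
# create the TCP transport with the options used by this run
"$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192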
00:11:51.083 [2024-07-15 12:51:09.197586] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.083 [2024-07-15 12:51:09.197618] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.083 [2024-07-15 12:51:09.197628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.083 [2024-07-15 12:51:09.197659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.340 12:51:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:51.598 [2024-07-15 12:51:09.568547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.598 ************************************ 00:11:51.598 START TEST lvs_grow_clean 00:11:51.598 ************************************ 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:51.598 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.855 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:51.855 12:51:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:52.113 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:11:52.113 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:11:52.113 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:52.370 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:52.370 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:52.370 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f lvol 150 00:11:52.627 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6aee7ba-451c-47f1-b979-e0bf392c773f 00:11:52.627 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:52.627 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:52.884 [2024-07-15 12:51:10.859869] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:52.884 [2024-07-15 12:51:10.859965] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:52.884 true 00:11:52.884 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:11:52.884 12:51:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:53.142 12:51:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:53.142 12:51:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:53.400 12:51:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6aee7ba-451c-47f1-b979-e0bf392c773f 00:11:53.400 12:51:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:53.658 [2024-07-15 12:51:11.826858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.658 12:51:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3355219 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3355219 /var/tmp/bdevperf.sock 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3355219 ']' 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:53.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:53.916 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:54.173 [2024-07-15 12:51:12.125955] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
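Condensed from the surrounding trace, lvs_grow_clean exercises growing a logical volume store that sits on a file-backed AIO bdev while the lvol is exported over NVMe/TCP. The sketch below strings together the RPCs and sizes from this run; the $rpc/$aio shorthands and the inline capture of the returned UUIDs are conveniences added here, not part of the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
# 200 MiB backing file -> AIO bdev -> lvstore with 4 MiB clusters -> 150 MiB lvol
truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
# export the lvol over NVMe/TCP on the target address set up earlier
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# grow: enlarge the backing file, rescan the AIO bdev, then grow the lvstore into the new space
truncate -s 400M "$aio"
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after in this run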
00:11:54.173 [2024-07-15 12:51:12.126048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3355219 ] 00:11:54.173 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.173 [2024-07-15 12:51:12.185140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.173 [2024-07-15 12:51:12.293644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.430 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:54.430 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:11:54.430 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:54.687 Nvme0n1 00:11:54.687 12:51:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:54.944 [ 00:11:54.944 { 00:11:54.944 "name": "Nvme0n1", 00:11:54.944 "aliases": [ 00:11:54.944 "c6aee7ba-451c-47f1-b979-e0bf392c773f" 00:11:54.944 ], 00:11:54.944 "product_name": "NVMe disk", 00:11:54.944 "block_size": 4096, 00:11:54.944 "num_blocks": 38912, 00:11:54.944 "uuid": "c6aee7ba-451c-47f1-b979-e0bf392c773f", 00:11:54.944 "assigned_rate_limits": { 00:11:54.944 "rw_ios_per_sec": 0, 00:11:54.944 "rw_mbytes_per_sec": 0, 00:11:54.944 "r_mbytes_per_sec": 0, 00:11:54.944 "w_mbytes_per_sec": 0 00:11:54.944 }, 00:11:54.944 "claimed": false, 00:11:54.944 "zoned": false, 00:11:54.944 "supported_io_types": { 00:11:54.944 "read": true, 00:11:54.944 "write": true, 00:11:54.944 "unmap": true, 00:11:54.944 "flush": true, 00:11:54.944 "reset": true, 00:11:54.944 "nvme_admin": true, 00:11:54.944 "nvme_io": true, 00:11:54.944 "nvme_io_md": false, 00:11:54.944 "write_zeroes": true, 00:11:54.944 "zcopy": false, 00:11:54.944 "get_zone_info": false, 00:11:54.944 "zone_management": false, 00:11:54.944 "zone_append": false, 00:11:54.944 "compare": true, 00:11:54.945 "compare_and_write": true, 00:11:54.945 "abort": true, 00:11:54.945 "seek_hole": false, 00:11:54.945 "seek_data": false, 00:11:54.945 "copy": true, 00:11:54.945 "nvme_iov_md": false 00:11:54.945 }, 00:11:54.945 "memory_domains": [ 00:11:54.945 { 00:11:54.945 "dma_device_id": "system", 00:11:54.945 "dma_device_type": 1 00:11:54.945 } 00:11:54.945 ], 00:11:54.945 "driver_specific": { 00:11:54.945 "nvme": [ 00:11:54.945 { 00:11:54.945 "trid": { 00:11:54.945 "trtype": "TCP", 00:11:54.945 "adrfam": "IPv4", 00:11:54.945 "traddr": "10.0.0.2", 00:11:54.945 "trsvcid": "4420", 00:11:54.945 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:54.945 }, 00:11:54.945 "ctrlr_data": { 00:11:54.945 "cntlid": 1, 00:11:54.945 "vendor_id": "0x8086", 00:11:54.945 "model_number": "SPDK bdev Controller", 00:11:54.945 "serial_number": "SPDK0", 00:11:54.945 "firmware_revision": "24.09", 00:11:54.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:54.945 "oacs": { 00:11:54.945 "security": 0, 00:11:54.945 "format": 0, 00:11:54.945 "firmware": 0, 00:11:54.945 "ns_manage": 0 00:11:54.945 }, 00:11:54.945 "multi_ctrlr": true, 00:11:54.945 "ana_reporting": false 00:11:54.945 }, 
00:11:54.945 "vs": { 00:11:54.945 "nvme_version": "1.3" 00:11:54.945 }, 00:11:54.945 "ns_data": { 00:11:54.945 "id": 1, 00:11:54.945 "can_share": true 00:11:54.945 } 00:11:54.945 } 00:11:54.945 ], 00:11:54.945 "mp_policy": "active_passive" 00:11:54.945 } 00:11:54.945 } 00:11:54.945 ] 00:11:54.945 12:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3355357 00:11:54.945 12:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:54.945 12:51:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:55.202 Running I/O for 10 seconds... 00:11:56.134 Latency(us) 00:11:56.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.134 Nvme0n1 : 1.00 16639.00 65.00 0.00 0.00 0.00 0.00 0.00 00:11:56.134 =================================================================================================================== 00:11:56.134 Total : 16639.00 65.00 0.00 0.00 0.00 0.00 0.00 00:11:56.134 00:11:57.064 12:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:11:57.064 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.064 Nvme0n1 : 2.00 16863.50 65.87 0.00 0.00 0.00 0.00 0.00 00:11:57.064 =================================================================================================================== 00:11:57.064 Total : 16863.50 65.87 0.00 0.00 0.00 0.00 0.00 00:11:57.064 00:11:57.321 true 00:11:57.321 12:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:11:57.321 12:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:57.578 12:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:57.578 12:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:57.578 12:51:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3355357 00:11:58.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.143 Nvme0n1 : 3.00 16985.67 66.35 0.00 0.00 0.00 0.00 0.00 00:11:58.143 =================================================================================================================== 00:11:58.143 Total : 16985.67 66.35 0.00 0.00 0.00 0.00 0.00 00:11:58.143 00:11:59.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.074 Nvme0n1 : 4.00 17083.25 66.73 0.00 0.00 0.00 0.00 0.00 00:11:59.074 =================================================================================================================== 00:11:59.074 Total : 17083.25 66.73 0.00 0.00 0.00 0.00 0.00 00:11:59.074 00:12:00.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.447 Nvme0n1 : 5.00 17114.40 66.85 0.00 0.00 0.00 0.00 0.00 00:12:00.447 =================================================================================================================== 00:12:00.447 
Total : 17114.40 66.85 0.00 0.00 0.00 0.00 0.00 00:12:00.447 00:12:01.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.383 Nvme0n1 : 6.00 17196.17 67.17 0.00 0.00 0.00 0.00 0.00 00:12:01.383 =================================================================================================================== 00:12:01.383 Total : 17196.17 67.17 0.00 0.00 0.00 0.00 0.00 00:12:01.383 00:12:02.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.316 Nvme0n1 : 7.00 17181.57 67.12 0.00 0.00 0.00 0.00 0.00 00:12:02.316 =================================================================================================================== 00:12:02.316 Total : 17181.57 67.12 0.00 0.00 0.00 0.00 0.00 00:12:02.316 00:12:03.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.255 Nvme0n1 : 8.00 17189.38 67.15 0.00 0.00 0.00 0.00 0.00 00:12:03.255 =================================================================================================================== 00:12:03.255 Total : 17189.38 67.15 0.00 0.00 0.00 0.00 0.00 00:12:03.255 00:12:04.188 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.188 Nvme0n1 : 9.00 17225.00 67.29 0.00 0.00 0.00 0.00 0.00 00:12:04.188 =================================================================================================================== 00:12:04.188 Total : 17225.00 67.29 0.00 0.00 0.00 0.00 0.00 00:12:04.188 00:12:05.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.122 Nvme0n1 : 10.00 17245.40 67.36 0.00 0.00 0.00 0.00 0.00 00:12:05.122 =================================================================================================================== 00:12:05.122 Total : 17245.40 67.36 0.00 0.00 0.00 0.00 0.00 00:12:05.122 00:12:05.122 00:12:05.122 Latency(us) 00:12:05.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.122 Nvme0n1 : 10.00 17243.42 67.36 0.00 0.00 7418.86 2172.40 18738.44 00:12:05.122 =================================================================================================================== 00:12:05.122 Total : 17243.42 67.36 0.00 0.00 7418.86 2172.40 18738.44 00:12:05.122 0 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3355219 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3355219 ']' 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3355219 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3355219 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3355219' 00:12:05.122 killing process with pid 3355219 00:12:05.122 12:51:23 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3355219 00:12:05.122 Received shutdown signal, test time was about 10.000000 seconds 00:12:05.122 00:12:05.122 Latency(us) 00:12:05.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.122 =================================================================================================================== 00:12:05.122 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:05.122 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3355219 00:12:05.380 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.964 12:51:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:06.221 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:06.221 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:06.479 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:06.479 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:06.479 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:06.479 [2024-07-15 12:51:24.676471] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:06.738 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:06.996 request: 00:12:06.996 { 00:12:06.996 "uuid": "5c287d66-dc3d-48f4-856e-50bd0bdfd88f", 00:12:06.996 "method": "bdev_lvol_get_lvstores", 00:12:06.996 "req_id": 1 00:12:06.996 } 00:12:06.996 Got JSON-RPC error response 00:12:06.996 response: 00:12:06.996 { 00:12:06.996 "code": -19, 00:12:06.996 "message": "No such device" 00:12:06.996 } 00:12:06.996 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:12:06.996 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.996 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.996 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.996 12:51:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.253 aio_bdev 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6aee7ba-451c-47f1-b979-e0bf392c773f 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c6aee7ba-451c-47f1-b979-e0bf392c773f 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.253 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:07.511 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6aee7ba-451c-47f1-b979-e0bf392c773f -t 2000 00:12:07.769 [ 00:12:07.769 { 00:12:07.769 "name": "c6aee7ba-451c-47f1-b979-e0bf392c773f", 00:12:07.769 "aliases": [ 00:12:07.769 "lvs/lvol" 00:12:07.769 ], 00:12:07.769 "product_name": "Logical Volume", 00:12:07.769 "block_size": 4096, 00:12:07.769 "num_blocks": 38912, 00:12:07.769 "uuid": "c6aee7ba-451c-47f1-b979-e0bf392c773f", 00:12:07.769 "assigned_rate_limits": { 00:12:07.769 "rw_ios_per_sec": 0, 00:12:07.769 "rw_mbytes_per_sec": 0, 00:12:07.769 "r_mbytes_per_sec": 0, 00:12:07.769 "w_mbytes_per_sec": 0 00:12:07.769 }, 00:12:07.769 "claimed": false, 00:12:07.769 "zoned": false, 00:12:07.769 "supported_io_types": { 00:12:07.769 "read": true, 00:12:07.769 "write": true, 00:12:07.769 "unmap": true, 00:12:07.769 "flush": false, 00:12:07.769 "reset": true, 00:12:07.769 "nvme_admin": false, 00:12:07.769 "nvme_io": false, 00:12:07.769 
"nvme_io_md": false, 00:12:07.769 "write_zeroes": true, 00:12:07.769 "zcopy": false, 00:12:07.769 "get_zone_info": false, 00:12:07.769 "zone_management": false, 00:12:07.769 "zone_append": false, 00:12:07.769 "compare": false, 00:12:07.769 "compare_and_write": false, 00:12:07.769 "abort": false, 00:12:07.769 "seek_hole": true, 00:12:07.769 "seek_data": true, 00:12:07.769 "copy": false, 00:12:07.769 "nvme_iov_md": false 00:12:07.769 }, 00:12:07.769 "driver_specific": { 00:12:07.769 "lvol": { 00:12:07.769 "lvol_store_uuid": "5c287d66-dc3d-48f4-856e-50bd0bdfd88f", 00:12:07.769 "base_bdev": "aio_bdev", 00:12:07.769 "thin_provision": false, 00:12:07.769 "num_allocated_clusters": 38, 00:12:07.769 "snapshot": false, 00:12:07.769 "clone": false, 00:12:07.769 "esnap_clone": false 00:12:07.769 } 00:12:07.769 } 00:12:07.769 } 00:12:07.769 ] 00:12:07.769 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:12:07.769 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:07.769 12:51:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:08.052 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:08.052 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:08.052 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:08.315 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:08.315 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6aee7ba-451c-47f1-b979-e0bf392c773f 00:12:08.574 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c287d66-dc3d-48f4-856e-50bd0bdfd88f 00:12:08.835 12:51:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.094 00:12:09.094 real 0m17.462s 00:12:09.094 user 0m16.955s 00:12:09.094 sys 0m1.897s 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:09.094 ************************************ 00:12:09.094 END TEST lvs_grow_clean 00:12:09.094 ************************************ 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:09.094 ************************************ 00:12:09.094 START TEST lvs_grow_dirty 00:12:09.094 ************************************ 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:09.094 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:09.351 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:09.351 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:09.609 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:09.609 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:09.609 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:09.866 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:09.866 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:09.866 12:51:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b02c306c-08b4-46c8-bf68-fb00522c7e93 lvol 150 00:12:10.125 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:10.125 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:10.125 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:10.384 
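The dirty-variant setup traced above, condensed into the underlying commands (paths, UUIDs and sizes exactly as in this run; $aio and $rpc are shorthand added only for readability):

aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
truncate -s 200M $aio                            # 200M backing file; the trace reports 49 data clusters
$rpc bdev_aio_create $aio aio_bdev 4096
$rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
$rpc bdev_lvol_create -u b02c306c-08b4-46c8-bf68-fb00522c7e93 lvol 150   # 150M lvol on the new lvstore
truncate -s 400M $aio                            # grow the file underneath the bdev
$rpc bdev_aio_rescan aio_bdev                    # let the AIO bdev pick up the new size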
[2024-07-15 12:51:28.446946] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:10.384 [2024-07-15 12:51:28.447052] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:10.384 true 00:12:10.384 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:10.384 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:10.643 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:10.643 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:10.902 12:51:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:11.161 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:11.419 [2024-07-15 12:51:29.490122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.419 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3357277 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3357277 /var/tmp/bdevperf.sock 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3357277 ']' 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:11.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
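The resized lvol is then served over NVMe/TCP and consumed by a separate bdevperf process through its own RPC socket; the calls traced above amount to the following sketch (NQN, addresses and socket path as in this run):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ecaefaee-5399-4ca4-9fc4-37148a3f5421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# bdevperf is started with -z so the run is kicked off later via perform_tests on its own socket
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests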
00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.677 12:51:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:11.677 [2024-07-15 12:51:29.842432] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:12:11.677 [2024-07-15 12:51:29.842520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357277 ] 00:12:11.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.935 [2024-07-15 12:51:29.901973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.935 [2024-07-15 12:51:30.021238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.935 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:11.935 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:11.935 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:12.503 Nvme0n1 00:12:12.503 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:12.761 [ 00:12:12.761 { 00:12:12.761 "name": "Nvme0n1", 00:12:12.761 "aliases": [ 00:12:12.761 "ecaefaee-5399-4ca4-9fc4-37148a3f5421" 00:12:12.761 ], 00:12:12.761 "product_name": "NVMe disk", 00:12:12.761 "block_size": 4096, 00:12:12.761 "num_blocks": 38912, 00:12:12.761 "uuid": "ecaefaee-5399-4ca4-9fc4-37148a3f5421", 00:12:12.761 "assigned_rate_limits": { 00:12:12.761 "rw_ios_per_sec": 0, 00:12:12.761 "rw_mbytes_per_sec": 0, 00:12:12.761 "r_mbytes_per_sec": 0, 00:12:12.761 "w_mbytes_per_sec": 0 00:12:12.761 }, 00:12:12.761 "claimed": false, 00:12:12.761 "zoned": false, 00:12:12.761 "supported_io_types": { 00:12:12.761 "read": true, 00:12:12.761 "write": true, 00:12:12.761 "unmap": true, 00:12:12.761 "flush": true, 00:12:12.761 "reset": true, 00:12:12.761 "nvme_admin": true, 00:12:12.761 "nvme_io": true, 00:12:12.761 "nvme_io_md": false, 00:12:12.761 "write_zeroes": true, 00:12:12.761 "zcopy": false, 00:12:12.761 "get_zone_info": false, 00:12:12.761 "zone_management": false, 00:12:12.761 "zone_append": false, 00:12:12.761 "compare": true, 00:12:12.761 "compare_and_write": true, 00:12:12.761 "abort": true, 00:12:12.761 "seek_hole": false, 00:12:12.761 "seek_data": false, 00:12:12.761 "copy": true, 00:12:12.761 "nvme_iov_md": false 00:12:12.761 }, 00:12:12.761 "memory_domains": [ 00:12:12.761 { 00:12:12.761 "dma_device_id": "system", 00:12:12.761 "dma_device_type": 1 00:12:12.761 } 00:12:12.761 ], 00:12:12.761 "driver_specific": { 00:12:12.761 "nvme": [ 00:12:12.761 { 00:12:12.761 "trid": { 00:12:12.761 "trtype": "TCP", 00:12:12.761 "adrfam": "IPv4", 00:12:12.761 "traddr": "10.0.0.2", 00:12:12.761 "trsvcid": "4420", 00:12:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:12.761 }, 00:12:12.761 "ctrlr_data": { 00:12:12.761 "cntlid": 1, 00:12:12.761 "vendor_id": "0x8086", 00:12:12.761 "model_number": "SPDK bdev Controller", 00:12:12.761 "serial_number": "SPDK0", 
00:12:12.761 "firmware_revision": "24.09", 00:12:12.761 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:12.761 "oacs": { 00:12:12.761 "security": 0, 00:12:12.761 "format": 0, 00:12:12.761 "firmware": 0, 00:12:12.761 "ns_manage": 0 00:12:12.761 }, 00:12:12.761 "multi_ctrlr": true, 00:12:12.761 "ana_reporting": false 00:12:12.761 }, 00:12:12.761 "vs": { 00:12:12.761 "nvme_version": "1.3" 00:12:12.761 }, 00:12:12.761 "ns_data": { 00:12:12.761 "id": 1, 00:12:12.761 "can_share": true 00:12:12.761 } 00:12:12.761 } 00:12:12.761 ], 00:12:12.761 "mp_policy": "active_passive" 00:12:12.761 } 00:12:12.761 } 00:12:12.761 ] 00:12:12.761 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3357412 00:12:12.761 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:12.761 12:51:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:12.761 Running I/O for 10 seconds... 00:12:14.141 Latency(us) 00:12:14.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.141 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.141 Nvme0n1 : 1.00 16833.00 65.75 0.00 0.00 0.00 0.00 0.00 00:12:14.141 =================================================================================================================== 00:12:14.141 Total : 16833.00 65.75 0.00 0.00 0.00 0.00 0.00 00:12:14.141 00:12:14.710 12:51:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:14.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:14.967 Nvme0n1 : 2.00 16964.50 66.27 0.00 0.00 0.00 0.00 0.00 00:12:14.967 =================================================================================================================== 00:12:14.967 Total : 16964.50 66.27 0.00 0.00 0.00 0.00 0.00 00:12:14.967 00:12:14.967 true 00:12:14.967 12:51:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:14.967 12:51:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:15.547 12:51:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:15.547 12:51:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:15.547 12:51:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3357412 00:12:15.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.806 Nvme0n1 : 3.00 17055.33 66.62 0.00 0.00 0.00 0.00 0.00 00:12:15.806 =================================================================================================================== 00:12:15.806 Total : 17055.33 66.62 0.00 0.00 0.00 0.00 0.00 00:12:15.806 00:12:17.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.185 Nvme0n1 : 4.00 17128.25 66.91 0.00 0.00 0.00 0.00 0.00 00:12:17.185 =================================================================================================================== 00:12:17.185 Total : 17128.25 66.91 0.00 
0.00 0.00 0.00 0.00 00:12:17.185 00:12:17.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.752 Nvme0n1 : 5.00 17176.60 67.10 0.00 0.00 0.00 0.00 0.00 00:12:17.752 =================================================================================================================== 00:12:17.752 Total : 17176.60 67.10 0.00 0.00 0.00 0.00 0.00 00:12:17.752 00:12:19.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.131 Nvme0n1 : 6.00 17185.83 67.13 0.00 0.00 0.00 0.00 0.00 00:12:19.131 =================================================================================================================== 00:12:19.131 Total : 17185.83 67.13 0.00 0.00 0.00 0.00 0.00 00:12:19.131 00:12:20.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.067 Nvme0n1 : 7.00 17214.71 67.24 0.00 0.00 0.00 0.00 0.00 00:12:20.067 =================================================================================================================== 00:12:20.067 Total : 17214.71 67.24 0.00 0.00 0.00 0.00 0.00 00:12:20.067 00:12:21.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.004 Nvme0n1 : 8.00 17246.50 67.37 0.00 0.00 0.00 0.00 0.00 00:12:21.004 =================================================================================================================== 00:12:21.004 Total : 17246.50 67.37 0.00 0.00 0.00 0.00 0.00 00:12:21.004 00:12:21.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.939 Nvme0n1 : 9.00 17295.11 67.56 0.00 0.00 0.00 0.00 0.00 00:12:21.939 =================================================================================================================== 00:12:21.939 Total : 17295.11 67.56 0.00 0.00 0.00 0.00 0.00 00:12:21.939 00:12:22.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.876 Nvme0n1 : 10.00 17290.70 67.54 0.00 0.00 0.00 0.00 0.00 00:12:22.876 =================================================================================================================== 00:12:22.876 Total : 17290.70 67.54 0.00 0.00 0.00 0.00 0.00 00:12:22.876 00:12:22.876 00:12:22.876 Latency(us) 00:12:22.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.876 Nvme0n1 : 10.00 17295.64 67.56 0.00 0.00 7396.79 1990.35 16699.54 00:12:22.876 =================================================================================================================== 00:12:22.876 Total : 17295.64 67.56 0.00 0.00 7396.79 1990.35 16699.54 00:12:22.876 0 00:12:22.876 12:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3357277 00:12:22.876 12:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3357277 ']' 00:12:22.876 12:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3357277 00:12:22.876 12:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:12:22.876 12:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:22.876 12:51:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3357277 00:12:22.876 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:22.876 12:51:41 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:22.876 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3357277' 00:12:22.876 killing process with pid 3357277 00:12:22.876 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3357277 00:12:22.876 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.876 00:12:22.876 Latency(us) 00:12:22.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.876 =================================================================================================================== 00:12:22.876 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.876 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3357277 00:12:23.135 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:23.393 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:23.651 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:23.651 12:51:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3354696 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3354696 00:12:23.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3354696 Killed "${NVMF_APP[@]}" "$@" 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3358743 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3358743 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3358743 ']' 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.910 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.168 [2024-07-15 12:51:42.126074] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:12:24.168 [2024-07-15 12:51:42.126167] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:24.168 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.168 [2024-07-15 12:51:42.190175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.168 [2024-07-15 12:51:42.295447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:24.168 [2024-07-15 12:51:42.295508] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:24.168 [2024-07-15 12:51:42.295536] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:24.168 [2024-07-15 12:51:42.295547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:24.168 [2024-07-15 12:51:42.295557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
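The previous target application (pid 3354696) was killed with SIGKILL just above, so the lvstore was never cleanly unloaded; the nvmf_tgt restarted here runs inside the test network namespace and is expected to recover the blobstore once the AIO bdev is re-created (the recovery notices follow below). Launch sketch, matching the traced command:

ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# the test then waits for /var/tmp/spdk.sock before issuing further RPCs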
00:12:24.168 [2024-07-15 12:51:42.295589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.426 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:24.685 [2024-07-15 12:51:42.653888] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:24.685 [2024-07-15 12:51:42.654064] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:24.685 [2024-07-15 12:51:42.654111] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:24.685 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:24.943 12:51:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ecaefaee-5399-4ca4-9fc4-37148a3f5421 -t 2000 00:12:24.943 [ 00:12:24.943 { 00:12:24.943 "name": "ecaefaee-5399-4ca4-9fc4-37148a3f5421", 00:12:24.943 "aliases": [ 00:12:24.943 "lvs/lvol" 00:12:24.943 ], 00:12:24.943 "product_name": "Logical Volume", 00:12:24.943 "block_size": 4096, 00:12:24.943 "num_blocks": 38912, 00:12:24.943 "uuid": "ecaefaee-5399-4ca4-9fc4-37148a3f5421", 00:12:24.943 "assigned_rate_limits": { 00:12:24.943 "rw_ios_per_sec": 0, 00:12:24.943 "rw_mbytes_per_sec": 0, 00:12:24.943 "r_mbytes_per_sec": 0, 00:12:24.943 "w_mbytes_per_sec": 0 00:12:24.943 }, 00:12:24.943 "claimed": false, 00:12:24.943 "zoned": false, 00:12:24.943 "supported_io_types": { 00:12:24.943 "read": true, 00:12:24.943 "write": true, 00:12:24.943 "unmap": true, 00:12:24.943 "flush": false, 00:12:24.943 "reset": true, 00:12:24.943 "nvme_admin": false, 00:12:24.943 "nvme_io": false, 00:12:24.943 "nvme_io_md": 
false, 00:12:24.943 "write_zeroes": true, 00:12:24.943 "zcopy": false, 00:12:24.943 "get_zone_info": false, 00:12:24.943 "zone_management": false, 00:12:24.943 "zone_append": false, 00:12:24.943 "compare": false, 00:12:24.943 "compare_and_write": false, 00:12:24.943 "abort": false, 00:12:24.943 "seek_hole": true, 00:12:24.943 "seek_data": true, 00:12:24.943 "copy": false, 00:12:24.943 "nvme_iov_md": false 00:12:24.943 }, 00:12:24.943 "driver_specific": { 00:12:24.943 "lvol": { 00:12:24.943 "lvol_store_uuid": "b02c306c-08b4-46c8-bf68-fb00522c7e93", 00:12:24.943 "base_bdev": "aio_bdev", 00:12:24.943 "thin_provision": false, 00:12:24.943 "num_allocated_clusters": 38, 00:12:24.943 "snapshot": false, 00:12:24.943 "clone": false, 00:12:24.943 "esnap_clone": false 00:12:24.943 } 00:12:24.943 } 00:12:24.943 } 00:12:24.943 ] 00:12:25.200 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:25.200 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:25.200 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:25.200 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:25.200 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:25.200 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:25.765 [2024-07-15 12:51:43.918970] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
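Deleting aio_bdev hot-removes the backing device and closes the lvstore, so the lookup that follows is wrapped in the NOT helper from autotest_common.sh and is expected to fail: the -19 / "No such device" JSON-RPC response printed below is the passing outcome here, not a test failure. Roughly (a sketch; NOT inverts the wrapped command's exit status):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_aio_delete aio_bdev
NOT $rpc bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93   # must fail now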
00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:25.765 12:51:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:26.022 request: 00:12:26.022 { 00:12:26.022 "uuid": "b02c306c-08b4-46c8-bf68-fb00522c7e93", 00:12:26.023 "method": "bdev_lvol_get_lvstores", 00:12:26.023 "req_id": 1 00:12:26.023 } 00:12:26.023 Got JSON-RPC error response 00:12:26.023 response: 00:12:26.023 { 00:12:26.023 "code": -19, 00:12:26.023 "message": "No such device" 00:12:26.023 } 00:12:26.023 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:26.023 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:26.023 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:26.023 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:26.023 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.281 aio_bdev 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:26.281 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:26.539 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ecaefaee-5399-4ca4-9fc4-37148a3f5421 -t 2000 00:12:26.796 [ 00:12:26.796 { 00:12:26.796 "name": "ecaefaee-5399-4ca4-9fc4-37148a3f5421", 00:12:26.796 "aliases": [ 00:12:26.796 "lvs/lvol" 00:12:26.796 ], 00:12:26.796 "product_name": "Logical Volume", 00:12:26.796 "block_size": 4096, 00:12:26.796 "num_blocks": 38912, 00:12:26.796 "uuid": "ecaefaee-5399-4ca4-9fc4-37148a3f5421", 00:12:26.796 "assigned_rate_limits": { 00:12:26.796 "rw_ios_per_sec": 0, 00:12:26.796 "rw_mbytes_per_sec": 0, 00:12:26.796 "r_mbytes_per_sec": 0, 00:12:26.796 "w_mbytes_per_sec": 0 00:12:26.796 }, 00:12:26.796 "claimed": false, 00:12:26.796 "zoned": false, 00:12:26.797 "supported_io_types": { 
00:12:26.797 "read": true, 00:12:26.797 "write": true, 00:12:26.797 "unmap": true, 00:12:26.797 "flush": false, 00:12:26.797 "reset": true, 00:12:26.797 "nvme_admin": false, 00:12:26.797 "nvme_io": false, 00:12:26.797 "nvme_io_md": false, 00:12:26.797 "write_zeroes": true, 00:12:26.797 "zcopy": false, 00:12:26.797 "get_zone_info": false, 00:12:26.797 "zone_management": false, 00:12:26.797 "zone_append": false, 00:12:26.797 "compare": false, 00:12:26.797 "compare_and_write": false, 00:12:26.797 "abort": false, 00:12:26.797 "seek_hole": true, 00:12:26.797 "seek_data": true, 00:12:26.797 "copy": false, 00:12:26.797 "nvme_iov_md": false 00:12:26.797 }, 00:12:26.797 "driver_specific": { 00:12:26.797 "lvol": { 00:12:26.797 "lvol_store_uuid": "b02c306c-08b4-46c8-bf68-fb00522c7e93", 00:12:26.797 "base_bdev": "aio_bdev", 00:12:26.797 "thin_provision": false, 00:12:26.797 "num_allocated_clusters": 38, 00:12:26.797 "snapshot": false, 00:12:26.797 "clone": false, 00:12:26.797 "esnap_clone": false 00:12:26.797 } 00:12:26.797 } 00:12:26.797 } 00:12:26.797 ] 00:12:26.797 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:12:26.797 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:26.797 12:51:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:27.053 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:27.053 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:27.053 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:27.311 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:27.311 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ecaefaee-5399-4ca4-9fc4-37148a3f5421 00:12:27.568 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b02c306c-08b4-46c8-bf68-fb00522c7e93 00:12:27.825 12:51:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:28.083 00:12:28.083 real 0m19.050s 00:12:28.083 user 0m48.079s 00:12:28.083 sys 0m5.028s 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:28.083 ************************************ 00:12:28.083 END TEST lvs_grow_dirty 00:12:28.083 ************************************ 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:28.083 nvmf_trace.0 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.083 rmmod nvme_tcp 00:12:28.083 rmmod nvme_fabrics 00:12:28.083 rmmod nvme_keyring 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3358743 ']' 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3358743 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3358743 ']' 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3358743 00:12:28.083 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3358743 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3358743' 00:12:28.341 killing process with pid 3358743 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3358743 00:12:28.341 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3358743 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.601 
12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.601 12:51:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.536 12:51:48 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.536 00:12:30.536 real 0m42.070s 00:12:30.536 user 1m10.653s 00:12:30.536 sys 0m8.934s 00:12:30.536 12:51:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.536 12:51:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.536 ************************************ 00:12:30.536 END TEST nvmf_lvs_grow 00:12:30.536 ************************************ 00:12:30.536 12:51:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:30.536 12:51:48 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:30.536 12:51:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.536 12:51:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.536 12:51:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:30.536 ************************************ 00:12:30.536 START TEST nvmf_bdev_io_wait 00:12:30.536 ************************************ 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:30.536 * Looking for test storage... 
00:12:30.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:30.536 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.537 12:51:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:33.071 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:33.071 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:33.071 Found net devices under 0000:84:00.0: cvl_0_0 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:33.071 Found net devices under 0000:84:00.1: cvl_0_1 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:33.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:33.071 00:12:33.071 --- 10.0.0.2 ping statistics --- 00:12:33.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.071 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:33.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:33.071 00:12:33.071 --- 10.0.0.1 ping statistics --- 00:12:33.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.071 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.071 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3361282 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3361282 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3361282 ']' 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.072 12:51:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 [2024-07-15 12:51:51.011345] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
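At this point the nvmf_tcp_init block has split the E810 pair across namespaces: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2, cvl_0_1 stayed in the root namespace as the initiator side with 10.0.0.1, and the two pings above confirm the 10.0.0.0/24 path in both directions before the target application comes up. Condensed, the setup traced above is roughly the following (same commands as in the trace, error handling omitted; a sketch, not the exact common.sh code):

# target-side port goes into its own namespace; initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside ns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns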
00:12:33.072 [2024-07-15 12:51:51.011426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.072 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.072 [2024-07-15 12:51:51.073665] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.072 [2024-07-15 12:51:51.177736] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.072 [2024-07-15 12:51:51.177797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.072 [2024-07-15 12:51:51.177825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.072 [2024-07-15 12:51:51.177837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.072 [2024-07-15 12:51:51.177846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.072 [2024-07-15 12:51:51.177926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.072 [2024-07-15 12:51:51.177992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.072 [2024-07-15 12:51:51.178126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.072 [2024-07-15 12:51:51.178129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.072 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 [2024-07-15 12:51:51.318375] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
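The rpc_cmd calls traced here and just below amount to a short bring-up sequence against the nvmf_tgt that was started with --wait-for-rpc inside cvl_0_0_ns_spdk. As stand-alone commands it would look roughly like this (a sketch assuming rpc_cmd maps onto scripts/rpc.py and the default /var/tmp/spdk.sock socket; the deliberately tiny bdev_io pool is presumably what this test uses to force allocations to fail and exercise the io_wait path):

rpc=./scripts/rpc.py                      # run from the spdk checkout; talks to /var/tmp/spdk.sock
$rpc bdev_set_options -p 5 -c 1           # 5-entry bdev_io pool, 1-entry per-thread cache
$rpc framework_start_init                 # finish the startup that --wait-for-rpc deferred
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                                         # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host NQN
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420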
00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 Malloc0 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.331 [2024-07-15 12:51:51.383482] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3361307 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3361308 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3361311 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.331 { 00:12:33.331 "params": { 00:12:33.331 "name": "Nvme$subsystem", 00:12:33.331 "trtype": "$TEST_TRANSPORT", 00:12:33.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.331 "adrfam": "ipv4", 00:12:33.331 "trsvcid": "$NVMF_PORT", 00:12:33.331 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.331 "hdgst": ${hdgst:-false}, 00:12:33.331 "ddgst": ${ddgst:-false} 00:12:33.331 }, 00:12:33.331 "method": "bdev_nvme_attach_controller" 00:12:33.331 } 00:12:33.331 EOF 00:12:33.331 )") 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3361313 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.331 { 00:12:33.331 "params": { 00:12:33.331 "name": "Nvme$subsystem", 00:12:33.331 "trtype": "$TEST_TRANSPORT", 00:12:33.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.331 "adrfam": "ipv4", 00:12:33.331 "trsvcid": "$NVMF_PORT", 00:12:33.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.331 "hdgst": ${hdgst:-false}, 00:12:33.331 "ddgst": ${ddgst:-false} 00:12:33.331 }, 00:12:33.331 "method": "bdev_nvme_attach_controller" 00:12:33.331 } 00:12:33.331 EOF 00:12:33.331 )") 00:12:33.331 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.332 { 00:12:33.332 "params": { 00:12:33.332 "name": "Nvme$subsystem", 00:12:33.332 "trtype": "$TEST_TRANSPORT", 00:12:33.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.332 "adrfam": "ipv4", 00:12:33.332 "trsvcid": "$NVMF_PORT", 00:12:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.332 "hdgst": ${hdgst:-false}, 00:12:33.332 "ddgst": ${ddgst:-false} 00:12:33.332 }, 00:12:33.332 "method": "bdev_nvme_attach_controller" 00:12:33.332 } 00:12:33.332 EOF 00:12:33.332 )") 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.332 { 00:12:33.332 "params": { 
00:12:33.332 "name": "Nvme$subsystem", 00:12:33.332 "trtype": "$TEST_TRANSPORT", 00:12:33.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.332 "adrfam": "ipv4", 00:12:33.332 "trsvcid": "$NVMF_PORT", 00:12:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.332 "hdgst": ${hdgst:-false}, 00:12:33.332 "ddgst": ${ddgst:-false} 00:12:33.332 }, 00:12:33.332 "method": "bdev_nvme_attach_controller" 00:12:33.332 } 00:12:33.332 EOF 00:12:33.332 )") 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3361307 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.332 "params": { 00:12:33.332 "name": "Nvme1", 00:12:33.332 "trtype": "tcp", 00:12:33.332 "traddr": "10.0.0.2", 00:12:33.332 "adrfam": "ipv4", 00:12:33.332 "trsvcid": "4420", 00:12:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.332 "hdgst": false, 00:12:33.332 "ddgst": false 00:12:33.332 }, 00:12:33.332 "method": "bdev_nvme_attach_controller" 00:12:33.332 }' 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.332 "params": { 00:12:33.332 "name": "Nvme1", 00:12:33.332 "trtype": "tcp", 00:12:33.332 "traddr": "10.0.0.2", 00:12:33.332 "adrfam": "ipv4", 00:12:33.332 "trsvcid": "4420", 00:12:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.332 "hdgst": false, 00:12:33.332 "ddgst": false 00:12:33.332 }, 00:12:33.332 "method": "bdev_nvme_attach_controller" 00:12:33.332 }' 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.332 "params": { 00:12:33.332 "name": "Nvme1", 00:12:33.332 "trtype": "tcp", 00:12:33.332 "traddr": "10.0.0.2", 00:12:33.332 "adrfam": "ipv4", 00:12:33.332 "trsvcid": "4420", 00:12:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.332 "hdgst": false, 00:12:33.332 "ddgst": false 00:12:33.332 }, 00:12:33.332 "method": "bdev_nvme_attach_controller" 00:12:33.332 }' 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.332 12:51:51 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.332 "params": { 00:12:33.332 "name": "Nvme1", 00:12:33.332 "trtype": "tcp", 00:12:33.332 "traddr": "10.0.0.2", 00:12:33.332 "adrfam": "ipv4", 00:12:33.332 "trsvcid": "4420", 00:12:33.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.332 "hdgst": false, 00:12:33.332 "ddgst": false 00:12:33.332 }, 00:12:33.332 "method": 
"bdev_nvme_attach_controller" 00:12:33.332 }' 00:12:33.332 [2024-07-15 12:51:51.432616] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:12:33.332 [2024-07-15 12:51:51.432635] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:12:33.332 [2024-07-15 12:51:51.432636] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:12:33.332 [2024-07-15 12:51:51.432635] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:12:33.332 [2024-07-15 12:51:51.432693] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:33.332 [2024-07-15 12:51:51.432719] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 12:51:51.432719] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 12:51:51.432719] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:33.332 --proc-type=auto ] 00:12:33.332 --proc-type=auto ] 00:12:33.332 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.590 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.590 [2024-07-15 12:51:51.617329] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.590 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.590 [2024-07-15 12:51:51.716104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:33.590 [2024-07-15 12:51:51.718522] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.590 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.849 [2024-07-15 12:51:51.817341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:33.849 [2024-07-15 12:51:51.819641] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.849 [2024-07-15 12:51:51.890836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.849 [2024-07-15 12:51:51.920249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:33.849 [2024-07-15 12:51:51.988237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:33.849 Running I/O for 1 seconds... 00:12:33.849 Running I/O for 1 seconds... 00:12:34.108 Running I/O for 1 seconds... 00:12:34.108 Running I/O for 1 seconds... 
00:12:35.047 00:12:35.047 Latency(us) 00:12:35.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.047 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:35.047 Nvme1n1 : 1.01 10767.28 42.06 0.00 0.00 11834.92 8107.05 18641.35 00:12:35.047 =================================================================================================================== 00:12:35.047 Total : 10767.28 42.06 0.00 0.00 11834.92 8107.05 18641.35 00:12:35.047 00:12:35.047 Latency(us) 00:12:35.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.047 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:35.047 Nvme1n1 : 1.00 198620.02 775.86 0.00 0.00 641.86 260.93 885.95 00:12:35.047 =================================================================================================================== 00:12:35.047 Total : 198620.02 775.86 0.00 0.00 641.86 260.93 885.95 00:12:35.047 00:12:35.047 Latency(us) 00:12:35.047 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.047 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:35.047 Nvme1n1 : 1.01 8746.01 34.16 0.00 0.00 14568.62 7961.41 26602.76 00:12:35.047 =================================================================================================================== 00:12:35.047 Total : 8746.01 34.16 0.00 0.00 14568.62 7961.41 26602.76 00:12:35.305 00:12:35.305 Latency(us) 00:12:35.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.305 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:35.305 Nvme1n1 : 1.01 9601.03 37.50 0.00 0.00 13278.33 6650.69 26796.94 00:12:35.305 =================================================================================================================== 00:12:35.305 Total : 9601.03 37.50 0.00 0.00 13278.33 6650.69 26796.94 00:12:35.305 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3361308 00:12:35.305 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3361311 00:12:35.305 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3361313 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.563 rmmod nvme_tcp 00:12:35.563 rmmod nvme_fabrics 00:12:35.563 rmmod nvme_keyring 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3361282 ']' 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3361282 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3361282 ']' 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3361282 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3361282 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3361282' 00:12:35.563 killing process with pid 3361282 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3361282 00:12:35.563 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3361282 00:12:35.819 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.819 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:35.819 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.819 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.820 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.820 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.820 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.820 12:51:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.771 12:51:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.771 00:12:37.771 real 0m7.295s 00:12:37.771 user 0m16.805s 00:12:37.771 sys 0m3.692s 00:12:37.771 12:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.771 12:51:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:37.771 ************************************ 00:12:37.771 END TEST nvmf_bdev_io_wait 00:12:37.771 ************************************ 00:12:37.771 12:51:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:37.771 12:51:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:37.771 12:51:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.771 12:51:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.771 12:51:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:38.029 ************************************ 00:12:38.029 START TEST nvmf_queue_depth 00:12:38.029 ************************************ 
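Before the queue_depth output begins, a note on the suite that just finished: each of the four initiators in nvmf_bdev_io_wait was a separate bdevperf process pinned to one core (-m 0x10/0x20/0x40/0x80, instance ids -i 1..4), driving one workload (write, read, flush, unmap) for one second at queue depth 128 with 4 KiB I/Os against the Nvme1 controller described by the JSON that gen_nvmf_target_json emitted. A hand-run equivalent of the write instance would look roughly like the sketch below; the outer "subsystems"/"config" wrapper and the /tmp/nvme1.json path are assumptions, since the trace only shows the bdev_nvme_attach_controller entry fed in through /dev/fd/63:

# hypothetical stand-alone reproduction of the '-w write' initiator from the suite above
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# -q 128: queue depth, -o 4096: I/O size, -t 1: one second, -s 256: 256 MB of hugepage memory (the -m 256 seen in the EAL parameters)
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256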
00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:38.029 * Looking for test storage... 00:12:38.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.029 12:51:56 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:40.562 
12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:40.562 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:40.562 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:40.562 Found net devices under 0000:84:00.0: cvl_0_0 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.562 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:40.563 Found net devices under 0000:84:00.1: cvl_0_1 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:40.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:40.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:12:40.563 00:12:40.563 --- 10.0.0.2 ping statistics --- 00:12:40.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.563 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:12:40.563 00:12:40.563 --- 10.0.0.1 ping statistics --- 00:12:40.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.563 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3363614 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3363614 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3363614 ']' 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:40.563 12:51:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:40.563 [2024-07-15 12:51:58.382787] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:12:40.563 [2024-07-15 12:51:58.382870] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.563 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.563 [2024-07-15 12:51:58.453338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.563 [2024-07-15 12:51:58.563910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.563 [2024-07-15 12:51:58.563969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.563 [2024-07-15 12:51:58.563984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.563 [2024-07-15 12:51:58.563996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.563 [2024-07-15 12:51:58.564007] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.563 [2024-07-15 12:51:58.564064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.131 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.131 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:41.131 12:51:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.131 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:41.131 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 [2024-07-15 12:51:59.344447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 Malloc0 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.390 
12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 [2024-07-15 12:51:59.401546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3363716 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3363716 /var/tmp/bdevperf.sock 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3363716 ']' 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:41.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.390 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.390 [2024-07-15 12:51:59.451891] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:12:41.390 [2024-07-15 12:51:59.451977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3363716 ] 00:12:41.390 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.390 [2024-07-15 12:51:59.516987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.649 [2024-07-15 12:51:59.627297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.649 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:41.649 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:12:41.649 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:41.649 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.649 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:41.910 NVMe0n1 00:12:41.910 12:51:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.910 12:51:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:41.910 Running I/O for 10 seconds... 00:12:51.898 00:12:51.898 Latency(us) 00:12:51.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.898 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:51.898 Verification LBA range: start 0x0 length 0x4000 00:12:51.898 NVMe0n1 : 10.08 9816.69 38.35 0.00 0.00 103891.36 20680.25 64468.01 00:12:51.898 =================================================================================================================== 00:12:51.898 Total : 9816.69 38.35 0.00 0.00 103891.36 20680.25 64468.01 00:12:51.898 0 00:12:51.898 12:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3363716 00:12:51.898 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3363716 ']' 00:12:51.898 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3363716 00:12:51.898 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:51.898 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:51.898 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3363716 00:12:52.157 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:52.157 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:52.157 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3363716' 00:12:52.157 killing process with pid 3363716 00:12:52.157 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3363716 00:12:52.157 Received shutdown signal, test time was about 10.000000 seconds 00:12:52.157 00:12:52.157 Latency(us) 00:12:52.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:52.157 
=================================================================================================================== 00:12:52.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:52.157 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3363716 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.417 rmmod nvme_tcp 00:12:52.417 rmmod nvme_fabrics 00:12:52.417 rmmod nvme_keyring 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3363614 ']' 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3363614 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3363614 ']' 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3363614 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3363614 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3363614' 00:12:52.417 killing process with pid 3363614 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3363614 00:12:52.417 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3363614 00:12:52.676 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.676 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.676 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.676 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.676 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.676 12:52:10 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.677 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.677 12:52:10 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.215 12:52:12 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.215 00:12:55.215 real 0m16.802s 00:12:55.215 user 0m23.159s 00:12:55.215 sys 0m3.444s 00:12:55.215 12:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.215 12:52:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:55.215 ************************************ 00:12:55.215 END TEST nvmf_queue_depth 00:12:55.215 ************************************ 00:12:55.215 12:52:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:55.215 12:52:12 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:55.215 12:52:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:55.215 12:52:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.215 12:52:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.215 ************************************ 00:12:55.215 START TEST nvmf_target_multipath 00:12:55.215 ************************************ 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:55.215 * Looking for test storage... 00:12:55.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.215 12:52:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:57.116 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:57.116 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:57.116 Found net devices under 0000:84:00.0: cvl_0_0 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:57.116 Found net devices under 0000:84:00.1: cvl_0_1 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.116 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:57.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:12:57.117 00:12:57.117 --- 10.0.0.2 ping statistics --- 00:12:57.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.117 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:57.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:12:57.117 00:12:57.117 --- 10.0.0.1 ping statistics --- 00:12:57.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.117 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:12:57.117 only one NIC for nvmf test 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:57.117 rmmod nvme_tcp 00:12:57.117 rmmod nvme_fabrics 00:12:57.117 rmmod nvme_keyring 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:57.117 12:52:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:59.650 00:12:59.650 real 0m4.446s 00:12:59.650 user 0m0.887s 00:12:59.650 sys 0m1.559s 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:59.650 12:52:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:59.650 ************************************ 00:12:59.650 END TEST nvmf_target_multipath 00:12:59.650 ************************************ 00:12:59.650 12:52:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:59.650 12:52:17 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:59.650 12:52:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:59.650 12:52:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.650 12:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.650 ************************************ 00:12:59.650 START TEST nvmf_zcopy 00:12:59.650 ************************************ 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:59.650 * Looking for test storage... 
00:12:59.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:59.650 12:52:17 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:01.556 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.556 
12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.556 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:01.556 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:01.557 Found net devices under 0000:84:00.0: cvl_0_0 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:01.557 Found net devices under 0000:84:00.1: cvl_0_1 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:13:01.557 00:13:01.557 --- 10.0.0.2 ping statistics --- 00:13:01.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.557 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:13:01.557 00:13:01.557 --- 10.0.0.1 ping statistics --- 00:13:01.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.557 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3368905 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3368905 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3368905 ']' 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.557 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.557 [2024-07-15 12:52:19.661913] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:13:01.557 [2024-07-15 12:52:19.662019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.557 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.557 [2024-07-15 12:52:19.727117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.814 [2024-07-15 12:52:19.830587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.814 [2024-07-15 12:52:19.830645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
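Note: the two ping exchanges above close out nvmf_tcp_init (nvmf/common.sh@229-268): cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened before nvmf_tgt is started inside that namespace. A condensed sketch of the same sequence, reusing the interface and address names from the trace (run as root; a rough reconstruction, not the harness script itself):

# target-side port lives in its own namespace; initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# insert an ACCEPT rule for NVMe/TCP (port 4420) traffic arriving on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify connectivity in both directions, as the trace does
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1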
00:13:01.814 [2024-07-15 12:52:19.830672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.814 [2024-07-15 12:52:19.830683] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.814 [2024-07-15 12:52:19.830693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.814 [2024-07-15 12:52:19.830719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.814 [2024-07-15 12:52:19.968377] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.814 [2024-07-15 12:52:19.984522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.814 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.815 12:52:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.815 malloc0 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.815 
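Note: the rpc_cmd calls above bring up the zcopy target configuration: a TCP transport created with zero-copy enabled (-o -c 0 --zcopy), subsystem nqn.2016-06.io.spdk:cnode1 (serial SPDK00000000000001, up to 10 namespaces) with data and discovery listeners on 10.0.0.2:4420, and a 32 MB malloc bdev with 4096-byte blocks; the namespace attach (nvmf_subsystem_add_ns) follows immediately below. Assuming rpc_cmd is the harness wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock, a roughly equivalent manual sequence would be:

# sketch: rebuild the same target configuration by hand against a running nvmf_tgt
RPC=./scripts/rpc.py   # path assumed relative to an SPDK checkout
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # the step traced on the next line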
12:52:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:01.815 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:02.084 { 00:13:02.084 "params": { 00:13:02.084 "name": "Nvme$subsystem", 00:13:02.084 "trtype": "$TEST_TRANSPORT", 00:13:02.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:02.084 "adrfam": "ipv4", 00:13:02.084 "trsvcid": "$NVMF_PORT", 00:13:02.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:02.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:02.084 "hdgst": ${hdgst:-false}, 00:13:02.084 "ddgst": ${ddgst:-false} 00:13:02.084 }, 00:13:02.084 "method": "bdev_nvme_attach_controller" 00:13:02.084 } 00:13:02.084 EOF 00:13:02.084 )") 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:02.084 12:52:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:02.084 "params": { 00:13:02.084 "name": "Nvme1", 00:13:02.084 "trtype": "tcp", 00:13:02.084 "traddr": "10.0.0.2", 00:13:02.084 "adrfam": "ipv4", 00:13:02.084 "trsvcid": "4420", 00:13:02.084 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:02.084 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:02.084 "hdgst": false, 00:13:02.084 "ddgst": false 00:13:02.084 }, 00:13:02.084 "method": "bdev_nvme_attach_controller" 00:13:02.084 }' 00:13:02.084 [2024-07-15 12:52:20.065970] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:13:02.084 [2024-07-15 12:52:20.066082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369003 ] 00:13:02.084 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.084 [2024-07-15 12:52:20.127661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.084 [2024-07-15 12:52:20.247263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.388 Running I/O for 10 seconds... 
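Note: the verify run just announced was launched as bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192, with gen_nvmf_target_json supplying the bdev_nvme_attach_controller parameters printed above over that descriptor; its per-device results follow. A hedged sketch of reproducing the same run by hand (the bdevperf path is relative to an SPDK build tree, and the "subsystems"/"config" wrapper is assumed here; only the attach-controller fragment appears verbatim in the trace):

# sketch: write the controller config to a file and hand it to bdevperf
cat > /tmp/nvme1_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/nvme1_attach.json -t 10 -q 128 -w verify -o 8192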
00:13:12.352 00:13:12.352 Latency(us) 00:13:12.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.352 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:12.352 Verification LBA range: start 0x0 length 0x1000 00:13:12.352 Nvme1n1 : 10.01 6533.31 51.04 0.00 0.00 19541.41 2694.26 28350.39 00:13:12.352 =================================================================================================================== 00:13:12.352 Total : 6533.31 51.04 0.00 0.00 19541.41 2694.26 28350.39 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3370241 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:12.609 { 00:13:12.609 "params": { 00:13:12.609 "name": "Nvme$subsystem", 00:13:12.609 "trtype": "$TEST_TRANSPORT", 00:13:12.609 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:12.609 "adrfam": "ipv4", 00:13:12.609 "trsvcid": "$NVMF_PORT", 00:13:12.609 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:12.609 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:12.609 "hdgst": ${hdgst:-false}, 00:13:12.609 "ddgst": ${ddgst:-false} 00:13:12.609 }, 00:13:12.609 "method": "bdev_nvme_attach_controller" 00:13:12.609 } 00:13:12.609 EOF 00:13:12.609 )") 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:12.609 [2024-07-15 12:52:30.790332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.609 [2024-07-15 12:52:30.790373] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:12.609 12:52:30 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:12.609 "params": { 00:13:12.609 "name": "Nvme1", 00:13:12.609 "trtype": "tcp", 00:13:12.609 "traddr": "10.0.0.2", 00:13:12.609 "adrfam": "ipv4", 00:13:12.609 "trsvcid": "4420", 00:13:12.609 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.609 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:12.609 "hdgst": false, 00:13:12.609 "ddgst": false 00:13:12.609 }, 00:13:12.609 "method": "bdev_nvme_attach_controller" 00:13:12.609 }' 00:13:12.609 [2024-07-15 12:52:30.798286] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.609 [2024-07-15 12:52:30.798307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.609 [2024-07-15 12:52:30.806308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.609 [2024-07-15 12:52:30.806328] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.609 [2024-07-15 12:52:30.814357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.609 [2024-07-15 12:52:30.814379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.822357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.822382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.830379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.830399] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.832250] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:13:12.866 [2024-07-15 12:52:30.832320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370241 ] 00:13:12.866 [2024-07-15 12:52:30.838401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.838421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.846424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.846443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.854445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.854472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.866 [2024-07-15 12:52:30.862482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.862501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.870489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.870508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.878510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.878529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.886532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.886551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.891376] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.866 [2024-07-15 12:52:30.894558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.894578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.902615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.902650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.910599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.910620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.918618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.918638] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.926639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.926659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.934661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 
12:52:30.934680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.942700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.942735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.866 [2024-07-15 12:52:30.950707] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.866 [2024-07-15 12:52:30.950756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:30.958781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:30.958816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:30.966775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:30.966797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:30.974792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:30.974812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:30.982805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:30.982826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:30.990824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:30.990844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:30.998860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:30.998881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.004763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.867 [2024-07-15 12:52:31.006868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.006888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.014890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.014911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.022942] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.022974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.030968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.031004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.038987] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.039039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.047015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.047066] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.055060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.055109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.063090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.063127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:12.867 [2024-07-15 12:52:31.071076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:12.867 [2024-07-15 12:52:31.071102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.079097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.079123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.087154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.087190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.095171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.095204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.103142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.103162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.111164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.111183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.119195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.119217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.127217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.127240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.135237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.135258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.143271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.125 [2024-07-15 12:52:31.143294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.125 [2024-07-15 12:52:31.151284] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.151306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.159305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.159327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.167323] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.167342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.175347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.175367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.183370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.183389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.191392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.191411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.199420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.199441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.207438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.207457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.215460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.215480] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.223482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.223502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.231504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.231523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.239526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.239545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.247554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.247574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.255575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.255595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.263599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.263618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.271618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.271637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.279641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.279660] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.287666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.287686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.295687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.295707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.303733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.303767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 Running I/O for 5 seconds... 00:13:13.126 [2024-07-15 12:52:31.311758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.311780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.323158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.323184] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.126 [2024-07-15 12:52:31.332343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.126 [2024-07-15 12:52:31.332385] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.343597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.343622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.355366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.355391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.364760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.364786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.375583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.375609] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.387055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.387081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.396164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.396189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.406676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.406700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.416497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.416522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.426707] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.426762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.436818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.436843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.446904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.446930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.457264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.457288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.467332] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.467356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.477428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.477452] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.487843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.487869] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.499898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.499924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.509560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.509584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.519971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.519998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.532215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.532240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.542113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.542137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.552243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.552268] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.562164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.562187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.572473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.572507] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.384 [2024-07-15 12:52:31.582893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.384 [2024-07-15 12:52:31.582919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.595633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.595657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.605258] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.605282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.615733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.615766] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.625565] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.625596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.635851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.635876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.645971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.645997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.656184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.656209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.666467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.666491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.678392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.678417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.687895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.687920] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.698532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.698555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.710652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.710676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.720945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.720971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.731056] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.731080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.741333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.741357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.751351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.751375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.761636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.761659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.771908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.771934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.782563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.782587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.795094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.795119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.805280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.805304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.815434] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.815457] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.825664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.825695] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.836141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.836179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.642 [2024-07-15 12:52:31.846539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.642 [2024-07-15 12:52:31.846562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.857562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.857586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.867816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.867844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.878468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.878493] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.889473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.889499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.901800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.901826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.911857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.911883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.922673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.922699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.934960] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.934986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.945105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.945130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.955794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.955820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.966519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.966543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.977294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.977319] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.987946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.987972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:31.998611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:31.998635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.009212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.009237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.020073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.020097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.032427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.032461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.042551] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.042575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.053175] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.053200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.063799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.063825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.074512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.074536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.087243] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.087267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:13.901 [2024-07-15 12:52:32.097339] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:13.901 [2024-07-15 12:52:32.097364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.108633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.108659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.119572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.119596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.130409] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.130433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.141010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.141050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.151638] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.151663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.162665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.162689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.173396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.173421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.184045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.184071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.194887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.194913] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.207194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.207219] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.217797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.217824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.228157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.228182] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.238579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.238612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.249224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.249249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.263308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.263334] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.274074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.274114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.285058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.285083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.296074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.296115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.306877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.306904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.317364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.317389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.329651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.329676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.339273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.339298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.350106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.350132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.160 [2024-07-15 12:52:32.360162] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.160 [2024-07-15 12:52:32.360187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.371382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.371406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.383383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.383407] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.392379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.392409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.403206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.403230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.413217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.413241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.423670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.423696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.433799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.433825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.444229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.444255] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.454568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.454593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.465462] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.465488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.477596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.477621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.487134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.487158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.496958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.496984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.506981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.507007] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.517259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.517284] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.527558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.527582] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.537477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.537501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.547452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.547476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.557435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.557459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.567510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.567535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.577467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.577490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.587343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.587366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.597392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.597416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.607703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.607749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.418 [2024-07-15 12:52:32.620615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.418 [2024-07-15 12:52:32.620640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.631367] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.631391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.641192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.641216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.651579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.651603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.661961] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.661987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.672240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.672264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.682178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.682202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.692622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.692646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.705075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.705099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.716919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.716945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.726316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.726342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.737220] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.737245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.749240] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.749264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.758617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.758640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.769377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.769402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.781858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.781884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.792848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.792873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.801845] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.801871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.813152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.813176] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.825183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.825207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.834446] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.834470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.845074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.845111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.855681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.855705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.868555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.868578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.877913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.877940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.692 [2024-07-15 12:52:32.888424] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.692 [2024-07-15 12:52:32.888448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.899097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.899121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.911654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.911677] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.921113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.921137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.933304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.933330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.944405] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.944430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.953139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.953163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.964072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.964111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.975829] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.975854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.951 [2024-07-15 12:52:32.985039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.951 [2024-07-15 12:52:32.985064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:32.996050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:32.996075] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.006411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.006436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.016571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.016595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.026789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.026815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.036838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.036871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.046797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.046823] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.056785] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.056810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.067144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.067169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.077269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.077293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.089086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.089125] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.098511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.098535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.108711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.108758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.118803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.118830] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.129301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.129325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.139301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.139325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:14.952 [2024-07-15 12:52:33.148803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:14.952 [2024-07-15 12:52:33.148828] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.159372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.159396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.171611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.171635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.180198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.180222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.192775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.192801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.202623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.202648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.212817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.212843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.222452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.222476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.232530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.232562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.243057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.243081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.253636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.253661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.263521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.263545] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.273647] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.273671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.283813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.283839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.293433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.293456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.303963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.303995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.316071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.316110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.327207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.327232] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.336995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.337036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.347323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.347347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.359219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.359244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.368385] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.368411] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.379451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.379477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.389922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.389949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.399983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.400010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.209 [2024-07-15 12:52:33.409876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.209 [2024-07-15 12:52:33.409901] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.420568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.420594] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.431398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.431428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.441961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.441994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.455068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.455107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.465618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.465643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.475930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.475956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.487829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.487856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.497492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.497517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.508042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.508067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.518512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.518537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.528595] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.528620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.538962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.538988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.549454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.549481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.559674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.559698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.570283] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.570308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.582751] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.582778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.592941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.592968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.603924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.603952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.614454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.614479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.625152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.625177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.637619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.637651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.647782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.647809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.658038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.658063] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.467 [2024-07-15 12:52:33.668700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.467 [2024-07-15 12:52:33.668748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.679598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.679622] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.692054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.692094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.701943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.701969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.712187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.712213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.722562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.722587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.732664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.732690] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.743206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.743231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.753471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.753495] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.764512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.764537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.775300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.775324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.787461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.787486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.797579] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.726 [2024-07-15 12:52:33.797603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.726 [2024-07-15 12:52:33.807918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.807945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.818461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.818486] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.830863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.830890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.840038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.840068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.851119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.851143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.861906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.861932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.872344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.872369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.883194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.883218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.893644] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.893667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.906057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.906096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.915945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.915971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:15.772 [2024-07-15 12:52:33.926294] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:15.772 [2024-07-15 12:52:33.926318] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.937252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.937277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.947653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.947678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.957694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.957719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.967999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.968038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.978577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.978602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.989145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.989170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:33.999335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:33.999358] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.009337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.009361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.019134] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.019158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.029353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.029377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.039236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.039260] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.051200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.051225] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.060529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.060553] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.070896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.070921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.081187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.081211] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.091768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.091794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.104033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.104058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.113560] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.113590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.125250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.125274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.134784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.134811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.145496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.145520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.155787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.155814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.165957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.165984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.175941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.175967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.186211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.186236] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.196464] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.196488] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.206820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.206845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.219011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.219050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.031 [2024-07-15 12:52:34.228517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.031 [2024-07-15 12:52:34.228541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.239203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.239228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.250040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.250065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.260033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.260058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.269853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.269880] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.280131] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.280155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.290562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.290586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.302670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.302694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.312455] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.312479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.322635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.322658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.332627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.332651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.342803] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.342829] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.352570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.352594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.289 [2024-07-15 12:52:34.364378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.289 [2024-07-15 12:52:34.364401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.373685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.373709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.384013] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.384055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.394453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.394477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.406227] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.406251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.415762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.415788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.426040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.426064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.438182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.438206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.446922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.446947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.456760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.456785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.466830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.466855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.477026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.477050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.290 [2024-07-15 12:52:34.487094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.290 [2024-07-15 12:52:34.487118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.497839] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.497864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.510346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.510370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.519702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.519748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.530198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.530223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.540249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.540272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.550206] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.550231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.560115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.560140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.570296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.570321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.579614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.579639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.589684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.589707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.599817] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.599842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.609885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.609910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.619669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.619701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.629666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.629691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.639625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.639649] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.649772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.649799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.659660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.659685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.669478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.669502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.679541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.679565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.689519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.689543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.699760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.699785] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.710162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.710187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.720225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.720249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.730359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.547 [2024-07-15 12:52:34.730384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.547 [2024-07-15 12:52:34.740819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.548 [2024-07-15 12:52:34.740847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.548 [2024-07-15 12:52:34.754187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.548 [2024-07-15 12:52:34.754213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.764616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.764641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.775112] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.775137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.787149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.787174] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.796291] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.796316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.807006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.807045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.818573] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.818604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.827708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.827756] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.837977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.838003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.847683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.847711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.859402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.859426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.869126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.869151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.878879] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.878907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.889218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.805 [2024-07-15 12:52:34.889243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.805 [2024-07-15 12:52:34.899703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.899750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.909924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.909951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.920000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.920044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.929901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.929928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.939636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.939660] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.949822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.949848] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.959971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.959998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.969885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.969911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.979542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.979566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.989184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.989208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:34.999186] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:34.999209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:16.806 [2024-07-15 12:52:35.009280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:16.806 [2024-07-15 12:52:35.009314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.064 [2024-07-15 12:52:35.021704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.064 [2024-07-15 12:52:35.021751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.064 [2024-07-15 12:52:35.032935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.064 [2024-07-15 12:52:35.032961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.041252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.041276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.053625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.053649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.065222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.065246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.074232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.074256] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.084959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.084984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.094845] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.094870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.105048] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.105074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.115114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.115138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.124864] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.124890] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.134623] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.134648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.144576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.144600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.154078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.154118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.163617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.163641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.173247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.173271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.183678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.183702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.196459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.196482] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.206211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.206242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.217986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.218011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.227848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.227874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.237825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.237850] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.247641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.247665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.257906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.257931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.065 [2024-07-15 12:52:35.270364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.065 [2024-07-15 12:52:35.270389] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.280300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.280324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.290347] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.290371] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.300268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.300293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.309760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.309800] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.319510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.319534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.329440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.329464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.339517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.339541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.349664] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.349688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.359570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.359593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.373116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.373140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.382591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.382615] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.393332] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.393356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.410736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.410790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.421291] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.421315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.431555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.324 [2024-07-15 12:52:35.431579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.324 [2024-07-15 12:52:35.441983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.442008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.452008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.452047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.461977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.462003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.472419] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.472445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.482950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.482975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.493146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.493170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.505353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.505377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.514589] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.514614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.325 [2024-07-15 12:52:35.527394] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.325 [2024-07-15 12:52:35.527421] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.538165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.538190] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.548411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.548435] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.558877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.558903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.568639] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.568663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.578900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.578925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.592364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.592388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.601517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.601541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.612120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.612145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.622192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.622216] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.632327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.632351] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.642130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.642155] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.652268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.652292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.662569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.662592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.675274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.675298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.684758] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.684783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.694976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.695001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.705329] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.705353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.717162] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.717187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.726617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.726641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.736257] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.736281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.745915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.745941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.756012] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.756052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.766202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.766226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.776287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.776312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.584 [2024-07-15 12:52:35.786527] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.584 [2024-07-15 12:52:35.786566] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.797337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.797361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.809127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.809152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.818605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.818629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.830808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.830835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.840876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.840903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.850897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.850925] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.861135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.861160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.871376] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.871401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.881688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.881713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.892532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.892559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.904735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.904769] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.914457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.843 [2024-07-15 12:52:35.914481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.843 [2024-07-15 12:52:35.924619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.924644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.935049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.935074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.947169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.947194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.956259] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.956283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.966157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.966183] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.976484] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.976510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.986797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.986825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:35.997189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:35.997213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:36.007475] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:36.007499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:36.017787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:36.017813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:36.027533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:36.027557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:36.037776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:36.037816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:17.844 [2024-07-15 12:52:36.050088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:17.844 [2024-07-15 12:52:36.050114] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.059580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.059604] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.069821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.069847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.079787] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.079814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.090133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.090158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.100124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.100148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.110636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.110660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.122789] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.122815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.132136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.132160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.142613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.142637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.154695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.154719] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.164040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.164066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.174280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.174305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.184983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.185008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.195166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.195197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.205319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.205342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.217850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.217874] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.227213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.227238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.237914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.237940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.247673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.247697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.257700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.257748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.267602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.267633] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.278007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.278046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.287830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.287855] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.297679] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.297703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.103 [2024-07-15 12:52:36.307903] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.103 [2024-07-15 12:52:36.307930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.318214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.318238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.327681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.327704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.332903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.332927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 00:13:18.362 Latency(us) 00:13:18.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.362 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:18.362 Nvme1n1 : 5.01 12370.95 96.65 0.00 0.00 10332.84 4393.34 20874.43 00:13:18.362 =================================================================================================================== 00:13:18.362 Total : 12370.95 96.65 0.00 0.00 10332.84 4393.34 20874.43 00:13:18.362 [2024-07-15 12:52:36.340956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.340980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.348940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.348969] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.356994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.357029] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.365056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.365110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.373077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.373128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.381093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.381142] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.389109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.389156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.397137] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.397187] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.405155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.405206] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.413176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.413226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.421207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.421254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.429232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.429287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.437247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.437298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.445263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.445309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.453289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.453340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.461312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.461362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.469333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.469380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.477293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.477317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.485305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.485325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.493326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.362 [2024-07-15 12:52:36.493346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.362 [2024-07-15 12:52:36.501346] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.501374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.509403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.509437] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.517456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.517504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.525479] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.525525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.533438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.533460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.541456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.541476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.549477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.549497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.557497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.557517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.363 [2024-07-15 12:52:36.565574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.363 [2024-07-15 12:52:36.565618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.621 [2024-07-15 12:52:36.573615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.621 [2024-07-15 12:52:36.573663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.621 [2024-07-15 12:52:36.581633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.621 [2024-07-15 12:52:36.581680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.621 [2024-07-15 12:52:36.589587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.621 [2024-07-15 12:52:36.589607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.621 [2024-07-15 12:52:36.597605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.621 [2024-07-15 12:52:36.597625] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.621 [2024-07-15 12:52:36.605627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:18.621 [2024-07-15 12:52:36.605646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:18.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3370241) - No such process 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3370241 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:13:18.621 delay0 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.621 12:52:36 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:18.621 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.621 [2024-07-15 12:52:36.728684] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:25.190 Initializing NVMe Controllers 00:13:25.190 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:25.190 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:25.190 Initialization complete. Launching workers. 00:13:25.190 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 13589 00:13:25.190 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 13768, failed to submit 89 00:13:25.190 success 13661, unsuccess 107, failed 0 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.190 rmmod nvme_tcp 00:13:25.190 rmmod nvme_fabrics 00:13:25.190 rmmod nvme_keyring 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3368905 ']' 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3368905 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3368905 ']' 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3368905 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3368905 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:25.190 12:52:43 
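The long run of "Requested NSID 1 already in use" / "Unable to add namespace" messages above comes from the zcopy test repeatedly re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached; the run keeps going and the test finishes normally (END TEST nvmf_zcopy below), so these errors look like deliberate churn from the script rather than a failure. After the loop, NSID 1 is swapped onto a deliberately slow delay bdev and the bundled abort example is pointed at the TCP listener. A minimal standalone sketch of that sequence, assuming a running nvmf_tgt with subsystem nqn.2016-06.io.spdk:cnode1 and a malloc0 bdev, the default RPC socket, and commands run from the SPDK source tree (the delay values match the log and are in microseconds, i.e. roughly 1 s per op):

    # sketch only -- names, paths and values taken from the log above
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s added latency per I/O
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With that much added latency each I/O stays queued long enough for an abort to catch it, which is presumably why most of the 13768 aborts submitted above succeed.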
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3368905' 00:13:25.190 killing process with pid 3368905 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3368905 00:13:25.190 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3368905 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:25.459 12:52:43 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.363 12:52:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:27.363 00:13:27.363 real 0m28.185s 00:13:27.363 user 0m39.902s 00:13:27.363 sys 0m10.222s 00:13:27.363 12:52:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.363 12:52:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:27.363 ************************************ 00:13:27.363 END TEST nvmf_zcopy 00:13:27.363 ************************************ 00:13:27.363 12:52:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:27.363 12:52:45 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:27.363 12:52:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.363 12:52:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.363 12:52:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:27.622 ************************************ 00:13:27.622 START TEST nvmf_nmic 00:13:27.622 ************************************ 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:27.622 * Looking for test storage... 
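nvmftestfini above then tears the zcopy environment down before the next test starts: the initiator-side NVMe kernel modules are unloaded, the nvmf_tgt reactor (pid 3368905) is killed and reaped, and the test address is flushed from the initiator port. Compressed into one place (module, pid and interface names as in the log), the teardown is roughly:

    modprobe -v -r nvme-tcp           # the log shows this also dropping nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3368905 && wait 3368905      # stop the nvmf_tgt started for this test
    ip -4 addr flush cvl_0_1          # remove 10.0.0.1/24 from the initiator port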
00:13:27.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.622 12:52:45 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:27.622 12:52:45 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:30.158 
12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:30.158 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.158 12:52:47 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:30.158 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:30.158 Found net devices under 0000:84:00.0: cvl_0_0 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:30.158 Found net devices under 0000:84:00.1: cvl_0_1 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:30.158 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
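The device scan above matched the two Intel E810 ports (vendor 0x8086, device 0x159b) at 0000:84:00.0 and 0000:84:00.1 and picked up their netdev names, cvl_0_0 and cvl_0_1, from sysfs. To double-check the same mapping by hand, something along these lines works (IDs and paths as reported in the log):

    lspci -d 8086:159b                          # list the E810 functions by vendor:device ID
    ls /sys/bus/pci/devices/0000:84:00.0/net    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:84:00.1/net    # -> cvl_0_1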
00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:30.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:30.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:13:30.159 00:13:30.159 --- 10.0.0.2 ping statistics --- 00:13:30.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.159 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:30.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:30.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:30.159 00:13:30.159 --- 10.0.0.1 ping statistics --- 00:13:30.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:30.159 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3373642 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3373642 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3373642 ']' 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.159 12:52:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.159 [2024-07-15 12:52:47.985656] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:13:30.159 [2024-07-15 12:52:47.985760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.159 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.159 [2024-07-15 12:52:48.053854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:30.159 [2024-07-15 12:52:48.167012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.159 [2024-07-15 12:52:48.167083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
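Annotation: nvmf_tcp_init above builds the loopback topology every TCP test in this log runs over. cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the 10.0.0.2 target side, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, port 4420 is opened in iptables, and both directions are ping-verified before the target starts. A minimal sketch of that setup, assuming the interface names and addresses this run happened to use:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow the NVMe/TCP port in
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator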
00:13:30.159 [2024-07-15 12:52:48.167111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.159 [2024-07-15 12:52:48.167123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.159 [2024-07-15 12:52:48.167133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.159 [2024-07-15 12:52:48.167182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.159 [2024-07-15 12:52:48.167208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.159 [2024-07-15 12:52:48.167266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.159 [2024-07-15 12:52:48.167269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.159 [2024-07-15 12:52:48.321686] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.159 Malloc0 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.159 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 [2024-07-15 12:52:48.373516] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:30.417 test case1: single bdev can't be used in multiple subsystems 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.417 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.417 [2024-07-15 12:52:48.397390] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:30.417 [2024-07-15 12:52:48.397427] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:30.417 [2024-07-15 12:52:48.397456] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:30.417 request: 00:13:30.417 { 00:13:30.417 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:30.417 "namespace": { 00:13:30.417 "bdev_name": "Malloc0", 00:13:30.418 "no_auto_visible": false 00:13:30.418 }, 00:13:30.418 "method": "nvmf_subsystem_add_ns", 00:13:30.418 "req_id": 1 00:13:30.418 } 00:13:30.418 Got JSON-RPC error response 00:13:30.418 response: 00:13:30.418 { 00:13:30.418 "code": -32602, 00:13:30.418 "message": "Invalid parameters" 00:13:30.418 } 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:30.418 Adding namespace failed - expected result. 
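Annotation: test case1 above is a deliberate failure. Malloc0 is already claimed exclusive_write as a namespace of cnode1, so adding it to cnode2 returns the -32602 Invalid parameters JSON-RPC error, and the non-zero nmic_status is exactly what the test expects. A hedged reproduction against a running nvmf_tgt via scripts/rpc.py, with the NQNs and bdev name taken from this run; the exact error text can differ between SPDK versions.

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # first claim of Malloc0 succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: bdev already claimed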
00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:30.418 test case2: host connect to nvmf target in multiple paths 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:30.418 [2024-07-15 12:52:48.405503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.418 12:52:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.983 12:52:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:31.550 12:52:49 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.550 12:52:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:31.550 12:52:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.550 12:52:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:31.550 12:52:49 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:34.131 12:52:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:34.131 [global] 00:13:34.131 thread=1 00:13:34.131 invalidate=1 00:13:34.131 rw=write 00:13:34.131 time_based=1 00:13:34.131 runtime=1 00:13:34.131 ioengine=libaio 00:13:34.131 direct=1 00:13:34.131 bs=4096 00:13:34.131 iodepth=1 00:13:34.131 norandommap=0 00:13:34.131 numjobs=1 00:13:34.131 00:13:34.131 verify_dump=1 00:13:34.131 verify_backlog=512 00:13:34.131 verify_state_save=0 00:13:34.131 do_verify=1 00:13:34.131 verify=crc32c-intel 00:13:34.131 [job0] 00:13:34.131 filename=/dev/nvme0n1 00:13:34.131 Could not set queue depth (nvme0n1) 00:13:34.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:34.131 fio-3.35 00:13:34.131 Starting 1 thread 00:13:35.128 00:13:35.128 job0: (groupid=0, jobs=1): err= 0: pid=3374279: Mon Jul 15 12:52:53 2024 00:13:35.128 read: IOPS=520, BW=2081KiB/s (2131kB/s)(2100KiB/1009msec) 00:13:35.128 slat (nsec): min=7272, max=36921, avg=12364.85, stdev=3855.37 
00:13:35.128 clat (usec): min=259, max=41953, avg=1317.04, stdev=6357.47 00:13:35.128 lat (usec): min=267, max=41986, avg=1329.41, stdev=6359.74 00:13:35.128 clat percentiles (usec): 00:13:35.128 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 289], 00:13:35.128 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:13:35.128 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 343], 00:13:35.128 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:13:35.128 | 99.99th=[42206] 00:13:35.128 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:13:35.128 slat (usec): min=8, max=41717, avg=85.50, stdev=1601.17 00:13:35.128 clat (usec): min=136, max=435, avg=212.38, stdev=56.52 00:13:35.128 lat (usec): min=146, max=41925, avg=297.88, stdev=1602.75 00:13:35.128 clat percentiles (usec): 00:13:35.128 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 163], 00:13:35.128 | 30.00th=[ 178], 40.00th=[ 190], 50.00th=[ 202], 60.00th=[ 221], 00:13:35.128 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 260], 95.00th=[ 351], 00:13:35.128 | 99.00th=[ 408], 99.50th=[ 416], 99.90th=[ 429], 99.95th=[ 437], 00:13:35.128 | 99.99th=[ 437] 00:13:35.128 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:13:35.128 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:35.128 lat (usec) : 250=58.30%, 500=40.80%, 750=0.06% 00:13:35.128 lat (msec) : 50=0.84% 00:13:35.128 cpu : usr=1.59%, sys=2.78%, ctx=1553, majf=0, minf=2 00:13:35.128 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:35.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:35.128 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:35.128 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:35.128 00:13:35.128 Run status group 0 (all jobs): 00:13:35.128 READ: bw=2081KiB/s (2131kB/s), 2081KiB/s-2081KiB/s (2131kB/s-2131kB/s), io=2100KiB (2150kB), run=1009-1009msec 00:13:35.128 WRITE: bw=4059KiB/s (4157kB/s), 4059KiB/s-4059KiB/s (4157kB/s-4157kB/s), io=4096KiB (4194kB), run=1009-1009msec 00:13:35.128 00:13:35.128 Disk stats (read/write): 00:13:35.128 nvme0n1: ios=547/1024, merge=0/0, ticks=1531/208, in_queue=1739, util=99.70% 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:35.128 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
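Annotation: the fio-wrapper call above (-p nvmf -i 4096 -d 1 -t write -r 1 -v) expands to the [global] section fio echoes back: libaio, direct I/O, 4 KiB blocks, queue depth 1, a one-second time-based write pass with crc32c-intel verification against the single namespace at /dev/nvme0n1. A standalone equivalent, on the assumption that the wrapper does little more than template a job file like this one:

cat > nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
numjobs=1
verify=crc32c-intel
do_verify=1
verify_dump=1
verify_backlog=512
verify_state_save=0

[job0]
filename=/dev/nvme0n1
EOF
fio nmic-write.fio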
nvmfcleanup 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.128 rmmod nvme_tcp 00:13:35.128 rmmod nvme_fabrics 00:13:35.128 rmmod nvme_keyring 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3373642 ']' 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3373642 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3373642 ']' 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3373642 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3373642 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3373642' 00:13:35.128 killing process with pid 3373642 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3373642 00:13:35.128 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3373642 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:35.388 12:52:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.923 12:52:55 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:37.923 00:13:37.923 real 0m10.026s 00:13:37.923 user 0m22.350s 00:13:37.923 sys 0m2.412s 00:13:37.923 12:52:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.923 12:52:55 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:37.923 ************************************ 00:13:37.923 END TEST nvmf_nmic 00:13:37.923 ************************************ 00:13:37.923 12:52:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:37.923 12:52:55 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:37.923 12:52:55 nvmf_tcp -- 
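Annotation: nvmftestfini above unwinds the setup in reverse order: disconnect the two controllers from cnode1, unload nvme-tcp and nvme-fabrics (which also drops nvme_keyring), kill the nvmf_tgt reactor process, then remove the test namespace and flush the initiator address. A condensed sketch using this run's names; the namespace deletion is an assumption about what _remove_spdk_ns does, not something the trace prints.

nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drops both the 4420 and 4421 paths
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 3373642                                         # PID of the nvmf_tgt started earlier; the harness then waits for it
ip netns delete cvl_0_0_ns_spdk                      # assumed behaviour of _remove_spdk_ns
ip -4 addr flush cvl_0_1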
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.923 12:52:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.923 12:52:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.923 ************************************ 00:13:37.923 START TEST nvmf_fio_target 00:13:37.923 ************************************ 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:37.923 * Looking for test storage... 00:13:37.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.923 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:37.924 12:52:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.824 12:52:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:39.824 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:39.824 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.824 12:52:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:39.824 Found net devices under 0000:84:00.0: cvl_0_0 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:39.824 Found net devices under 0000:84:00.1: cvl_0_1 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:39.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:39.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:13:39.824 00:13:39.824 --- 10.0.0.2 ping statistics --- 00:13:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.824 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:39.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:39.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:13:39.824 00:13:39.824 --- 10.0.0.1 ping statistics --- 00:13:39.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:39.824 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:39.824 12:52:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3376376 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3376376 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3376376 ']' 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
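Annotation: the target for the fio tests is started the same way as in the nmic test, nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace so it owns the 10.0.0.2 side, and its flags line up with the notices that follow: a 0xFFFF tracepoint group mask and a 0xF core mask that produces the four reactors reported on cores 0-3. A sketch of that invocation with the repo path shortened; -i sets the shared-memory instance id, which is why EAL logs --file-prefix=spdk0.

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# simple readiness probe before issuing RPCs; the harness uses its own waitforlisten helper instead
./scripts/rpc.py rpc_get_methods > /dev/null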
00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.825 12:52:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.825 [2024-07-15 12:52:58.014574] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:13:39.825 [2024-07-15 12:52:58.014643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.083 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.083 [2024-07-15 12:52:58.077218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.083 [2024-07-15 12:52:58.181227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.083 [2024-07-15 12:52:58.181297] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.083 [2024-07-15 12:52:58.181311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.083 [2024-07-15 12:52:58.181321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.083 [2024-07-15 12:52:58.181330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.083 [2024-07-15 12:52:58.181454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.083 [2024-07-15 12:52:58.181560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.083 [2024-07-15 12:52:58.181632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.083 [2024-07-15 12:52:58.181635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:40.340 12:52:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:40.597 [2024-07-15 12:52:58.600599] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:40.597 12:52:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:40.855 12:52:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:40.855 12:52:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.114 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:41.114 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.372 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
00:13:41.372 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:41.629 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:41.629 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:41.887 12:52:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.144 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:42.144 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.402 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:42.402 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.660 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:42.660 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:42.916 12:53:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:43.173 12:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:43.173 12:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:43.430 12:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:43.430 12:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:43.687 12:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.945 [2024-07-15 12:53:01.954654] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.945 12:53:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:44.203 12:53:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:44.460 12:53:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.025 12:53:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:45.026 12:53:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:45.026 12:53:03 nvmf_tcp.nvmf_fio_target -- 
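Annotation: fio.sh above assembles four backing devices and exports them as four namespaces of one subsystem: Malloc0 and Malloc1 directly, raid0 striped over Malloc2 and Malloc3, and concat0 concatenating Malloc4 through Malloc6, all 64 MiB malloc bdevs with 512-byte blocks behind the 10.0.0.2:4420 listener. The same construction collected into one rpc.py sequence, using the sizes and names from this run:

rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512                                    # Malloc0, namespace 1
$rpc bdev_malloc_create 64 512                                    # Malloc1, namespace 2
$rpc bdev_malloc_create 64 512                                    # Malloc2, member of raid0
$rpc bdev_malloc_create 64 512                                    # Malloc3, member of raid0
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512                                    # Malloc4, member of concat0
$rpc bdev_malloc_create 64 512                                    # Malloc5, member of concat0
$rpc bdev_malloc_create 64 512                                    # Malloc6, member of concat0
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
# after nvme connect, these surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4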
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.026 12:53:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:45.026 12:53:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:45.026 12:53:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:46.922 12:53:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:46.922 [global] 00:13:46.922 thread=1 00:13:46.922 invalidate=1 00:13:46.922 rw=write 00:13:46.922 time_based=1 00:13:46.922 runtime=1 00:13:46.922 ioengine=libaio 00:13:46.922 direct=1 00:13:46.922 bs=4096 00:13:46.922 iodepth=1 00:13:46.922 norandommap=0 00:13:46.922 numjobs=1 00:13:46.922 00:13:46.922 verify_dump=1 00:13:46.922 verify_backlog=512 00:13:46.922 verify_state_save=0 00:13:46.922 do_verify=1 00:13:46.922 verify=crc32c-intel 00:13:46.922 [job0] 00:13:46.922 filename=/dev/nvme0n1 00:13:46.922 [job1] 00:13:46.922 filename=/dev/nvme0n2 00:13:46.922 [job2] 00:13:46.922 filename=/dev/nvme0n3 00:13:46.922 [job3] 00:13:46.922 filename=/dev/nvme0n4 00:13:47.180 Could not set queue depth (nvme0n1) 00:13:47.180 Could not set queue depth (nvme0n2) 00:13:47.180 Could not set queue depth (nvme0n3) 00:13:47.180 Could not set queue depth (nvme0n4) 00:13:47.180 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.180 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.180 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.180 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:47.180 fio-3.35 00:13:47.180 Starting 4 threads 00:13:48.559 00:13:48.559 job0: (groupid=0, jobs=1): err= 0: pid=3377359: Mon Jul 15 12:53:06 2024 00:13:48.559 read: IOPS=1711, BW=6845KiB/s (7009kB/s)(6852KiB/1001msec) 00:13:48.559 slat (nsec): min=6337, max=36843, avg=7693.66, stdev=1972.81 00:13:48.559 clat (usec): min=197, max=41066, avg=338.47, stdev=1963.25 00:13:48.559 lat (usec): min=203, max=41080, avg=346.17, stdev=1964.00 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:13:48.559 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:13:48.559 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 285], 00:13:48.559 | 99.00th=[ 416], 99.50th=[ 510], 99.90th=[41157], 99.95th=[41157], 00:13:48.559 | 99.99th=[41157] 00:13:48.559 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:48.559 slat (nsec): min=8165, max=53658, avg=11712.33, stdev=5912.31 00:13:48.559 clat 
(usec): min=125, max=462, avg=182.28, stdev=44.15 00:13:48.559 lat (usec): min=134, max=473, avg=193.99, stdev=48.16 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:13:48.559 | 30.00th=[ 155], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 174], 00:13:48.559 | 70.00th=[ 186], 80.00th=[ 210], 90.00th=[ 253], 95.00th=[ 273], 00:13:48.559 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 412], 99.95th=[ 420], 00:13:48.559 | 99.99th=[ 461] 00:13:48.559 bw ( KiB/s): min= 7008, max= 7008, per=34.50%, avg=7008.00, stdev= 0.00, samples=1 00:13:48.559 iops : min= 1752, max= 1752, avg=1752.00, stdev= 0.00, samples=1 00:13:48.559 lat (usec) : 250=80.27%, 500=19.46%, 750=0.16% 00:13:48.559 lat (msec) : 50=0.11% 00:13:48.559 cpu : usr=3.10%, sys=4.40%, ctx=3763, majf=0, minf=2 00:13:48.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 issued rwts: total=1713,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.559 job1: (groupid=0, jobs=1): err= 0: pid=3377384: Mon Jul 15 12:53:06 2024 00:13:48.559 read: IOPS=20, BW=83.7KiB/s (85.8kB/s)(84.0KiB/1003msec) 00:13:48.559 slat (nsec): min=9363, max=33067, avg=15683.86, stdev=5031.24 00:13:48.559 clat (usec): min=40901, max=41992, avg=41053.95, stdev=240.50 00:13:48.559 lat (usec): min=40917, max=42005, avg=41069.63, stdev=239.23 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:48.559 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:48.559 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:48.559 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:48.559 | 99.99th=[42206] 00:13:48.559 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:13:48.559 slat (nsec): min=7323, max=73422, avg=15872.24, stdev=9282.89 00:13:48.559 clat (usec): min=153, max=486, avg=253.95, stdev=53.85 00:13:48.559 lat (usec): min=161, max=510, avg=269.82, stdev=59.32 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 192], 20.00th=[ 212], 00:13:48.559 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 255], 00:13:48.559 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 355], 00:13:48.559 | 99.00th=[ 408], 99.50th=[ 412], 99.90th=[ 486], 99.95th=[ 486], 00:13:48.559 | 99.99th=[ 486] 00:13:48.559 bw ( KiB/s): min= 4096, max= 4096, per=20.16%, avg=4096.00, stdev= 0.00, samples=1 00:13:48.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.559 lat (usec) : 250=54.22%, 500=41.84% 00:13:48.559 lat (msec) : 50=3.94% 00:13:48.559 cpu : usr=0.30%, sys=1.20%, ctx=533, majf=0, minf=1 00:13:48.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.559 job2: (groupid=0, jobs=1): err= 0: pid=3377426: Mon Jul 15 12:53:06 2024 00:13:48.559 read: IOPS=1534, BW=6138KiB/s 
(6285kB/s)(6144KiB/1001msec) 00:13:48.559 slat (nsec): min=4599, max=32222, avg=10729.70, stdev=4626.63 00:13:48.559 clat (usec): min=218, max=634, avg=305.21, stdev=44.57 00:13:48.559 lat (usec): min=224, max=648, avg=315.94, stdev=45.65 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 237], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:13:48.559 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:13:48.559 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 375], 00:13:48.559 | 99.00th=[ 469], 99.50th=[ 510], 99.90th=[ 619], 99.95th=[ 635], 00:13:48.559 | 99.99th=[ 635] 00:13:48.559 write: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec); 0 zone resets 00:13:48.559 slat (usec): min=5, max=100, avg=13.59, stdev=10.66 00:13:48.559 clat (usec): min=152, max=1836, avg=232.75, stdev=84.22 00:13:48.559 lat (usec): min=159, max=1866, avg=246.35, stdev=91.14 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 178], 00:13:48.559 | 30.00th=[ 188], 40.00th=[ 196], 50.00th=[ 206], 60.00th=[ 217], 00:13:48.559 | 70.00th=[ 231], 80.00th=[ 269], 90.00th=[ 347], 95.00th=[ 420], 00:13:48.559 | 99.00th=[ 474], 99.50th=[ 490], 99.90th=[ 545], 99.95th=[ 562], 00:13:48.559 | 99.99th=[ 1844] 00:13:48.559 bw ( KiB/s): min= 8192, max= 8192, per=40.33%, avg=8192.00, stdev= 0.00, samples=1 00:13:48.559 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:48.559 lat (usec) : 250=46.03%, 500=53.55%, 750=0.39% 00:13:48.559 lat (msec) : 2=0.03% 00:13:48.559 cpu : usr=1.40%, sys=5.30%, ctx=3578, majf=0, minf=1 00:13:48.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 issued rwts: total=1536,2042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.559 job3: (groupid=0, jobs=1): err= 0: pid=3377440: Mon Jul 15 12:53:06 2024 00:13:48.559 read: IOPS=476, BW=1907KiB/s (1952kB/s)(1920KiB/1007msec) 00:13:48.559 slat (nsec): min=7744, max=59449, avg=9417.71, stdev=3628.91 00:13:48.559 clat (usec): min=227, max=41181, avg=1788.38, stdev=7734.20 00:13:48.559 lat (usec): min=235, max=41196, avg=1797.80, stdev=7735.88 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:13:48.559 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:13:48.559 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 322], 00:13:48.559 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:48.559 | 99.99th=[41157] 00:13:48.559 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:13:48.559 slat (nsec): min=7824, max=60270, avg=17040.52, stdev=9146.61 00:13:48.559 clat (usec): min=161, max=582, avg=255.65, stdev=59.45 00:13:48.559 lat (usec): min=171, max=605, avg=272.69, stdev=63.52 00:13:48.559 clat percentiles (usec): 00:13:48.559 | 1.00th=[ 176], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 217], 00:13:48.559 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 251], 00:13:48.559 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 318], 95.00th=[ 408], 00:13:48.559 | 99.00th=[ 445], 99.50th=[ 494], 99.90th=[ 586], 99.95th=[ 586], 00:13:48.559 | 99.99th=[ 586] 00:13:48.559 bw ( KiB/s): min= 4096, max= 4096, per=20.16%, avg=4096.00, 
stdev= 0.00, samples=1 00:13:48.559 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:48.559 lat (usec) : 250=41.94%, 500=56.05%, 750=0.20% 00:13:48.559 lat (msec) : 50=1.81% 00:13:48.559 cpu : usr=0.89%, sys=1.39%, ctx=992, majf=0, minf=1 00:13:48.559 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.559 issued rwts: total=480,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.559 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.559 00:13:48.559 Run status group 0 (all jobs): 00:13:48.559 READ: bw=14.5MiB/s (15.3MB/s), 83.7KiB/s-6845KiB/s (85.8kB/s-7009kB/s), io=14.6MiB (15.4MB), run=1001-1007msec 00:13:48.559 WRITE: bw=19.8MiB/s (20.8MB/s), 2034KiB/s-8184KiB/s (2083kB/s-8380kB/s), io=20.0MiB (20.9MB), run=1001-1007msec 00:13:48.559 00:13:48.559 Disk stats (read/write): 00:13:48.559 nvme0n1: ios=1446/1536, merge=0/0, ticks=845/272, in_queue=1117, util=84.77% 00:13:48.559 nvme0n2: ios=66/512, merge=0/0, ticks=726/115, in_queue=841, util=88.85% 00:13:48.559 nvme0n3: ios=1362/1536, merge=0/0, ticks=494/376, in_queue=870, util=93.20% 00:13:48.559 nvme0n4: ios=532/512, merge=0/0, ticks=749/123, in_queue=872, util=95.91% 00:13:48.559 12:53:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:48.559 [global] 00:13:48.559 thread=1 00:13:48.559 invalidate=1 00:13:48.559 rw=randwrite 00:13:48.559 time_based=1 00:13:48.559 runtime=1 00:13:48.559 ioengine=libaio 00:13:48.559 direct=1 00:13:48.559 bs=4096 00:13:48.559 iodepth=1 00:13:48.559 norandommap=0 00:13:48.559 numjobs=1 00:13:48.559 00:13:48.559 verify_dump=1 00:13:48.559 verify_backlog=512 00:13:48.559 verify_state_save=0 00:13:48.559 do_verify=1 00:13:48.559 verify=crc32c-intel 00:13:48.559 [job0] 00:13:48.559 filename=/dev/nvme0n1 00:13:48.559 [job1] 00:13:48.559 filename=/dev/nvme0n2 00:13:48.559 [job2] 00:13:48.559 filename=/dev/nvme0n3 00:13:48.559 [job3] 00:13:48.560 filename=/dev/nvme0n4 00:13:48.560 Could not set queue depth (nvme0n1) 00:13:48.560 Could not set queue depth (nvme0n2) 00:13:48.560 Could not set queue depth (nvme0n3) 00:13:48.560 Could not set queue depth (nvme0n4) 00:13:48.817 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.817 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.817 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.817 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:48.817 fio-3.35 00:13:48.817 Starting 4 threads 00:13:50.184 00:13:50.184 job0: (groupid=0, jobs=1): err= 0: pid=3377679: Mon Jul 15 12:53:07 2024 00:13:50.184 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:50.184 slat (nsec): min=6423, max=46274, avg=13691.61, stdev=5981.00 00:13:50.184 clat (usec): min=220, max=875, avg=360.20, stdev=68.44 00:13:50.184 lat (usec): min=229, max=884, avg=373.89, stdev=70.39 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 245], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 00:13:50.184 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 351], 60.00th=[ 367], 00:13:50.184 | 
70.00th=[ 379], 80.00th=[ 400], 90.00th=[ 445], 95.00th=[ 498], 00:13:50.184 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[ 709], 99.95th=[ 873], 00:13:50.184 | 99.99th=[ 873] 00:13:50.184 write: IOPS=1691, BW=6765KiB/s (6928kB/s)(6772KiB/1001msec); 0 zone resets 00:13:50.184 slat (nsec): min=8293, max=55912, avg=13998.72, stdev=6842.37 00:13:50.184 clat (usec): min=151, max=468, avg=229.70, stdev=43.16 00:13:50.184 lat (usec): min=160, max=493, avg=243.70, stdev=45.67 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 196], 00:13:50.184 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 231], 00:13:50.184 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 285], 95.00th=[ 318], 00:13:50.184 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 469], 00:13:50.184 | 99.99th=[ 469] 00:13:50.184 bw ( KiB/s): min= 8192, max= 8192, per=35.02%, avg=8192.00, stdev= 0.00, samples=1 00:13:50.184 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:50.184 lat (usec) : 250=40.97%, 500=56.77%, 750=2.23%, 1000=0.03% 00:13:50.184 cpu : usr=3.60%, sys=6.10%, ctx=3229, majf=0, minf=1 00:13:50.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 issued rwts: total=1536,1693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.184 job1: (groupid=0, jobs=1): err= 0: pid=3377680: Mon Jul 15 12:53:07 2024 00:13:50.184 read: IOPS=1587, BW=6350KiB/s (6502kB/s)(6356KiB/1001msec) 00:13:50.184 slat (nsec): min=7544, max=43912, avg=11879.17, stdev=4734.68 00:13:50.184 clat (usec): min=226, max=504, avg=306.82, stdev=50.21 00:13:50.184 lat (usec): min=234, max=514, avg=318.70, stdev=52.31 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:13:50.184 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 310], 00:13:50.184 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 396], 00:13:50.184 | 99.00th=[ 445], 99.50th=[ 478], 99.90th=[ 494], 99.95th=[ 506], 00:13:50.184 | 99.99th=[ 506] 00:13:50.184 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:13:50.184 slat (nsec): min=8540, max=53253, avg=16094.58, stdev=6880.09 00:13:50.184 clat (usec): min=150, max=3218, avg=217.66, stdev=83.61 00:13:50.184 lat (usec): min=160, max=3233, avg=233.75, stdev=85.34 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 169], 20.00th=[ 180], 00:13:50.184 | 30.00th=[ 188], 40.00th=[ 200], 50.00th=[ 215], 60.00th=[ 229], 00:13:50.184 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 273], 00:13:50.184 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 416], 99.95th=[ 1844], 00:13:50.184 | 99.99th=[ 3228] 00:13:50.184 bw ( KiB/s): min= 8192, max= 8192, per=35.02%, avg=8192.00, stdev= 0.00, samples=1 00:13:50.184 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:50.184 lat (usec) : 250=48.58%, 500=51.33%, 750=0.03% 00:13:50.184 lat (msec) : 2=0.03%, 4=0.03% 00:13:50.184 cpu : usr=3.00%, sys=8.10%, ctx=3637, majf=0, minf=2 00:13:50.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 issued rwts: total=1589,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.184 job2: (groupid=0, jobs=1): err= 0: pid=3377681: Mon Jul 15 12:53:07 2024 00:13:50.184 read: IOPS=22, BW=91.5KiB/s (93.7kB/s)(92.0KiB/1005msec) 00:13:50.184 slat (nsec): min=10483, max=44926, avg=24359.74, stdev=10517.96 00:13:50.184 clat (usec): min=287, max=41077, avg=38049.94, stdev=9834.90 00:13:50.184 lat (usec): min=304, max=41091, avg=38074.30, stdev=9837.98 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 289], 5.00th=[15139], 10.00th=[40633], 20.00th=[40633], 00:13:50.184 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:50.184 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:50.184 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:50.184 | 99.99th=[41157] 00:13:50.184 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:13:50.184 slat (nsec): min=8044, max=49628, avg=15437.16, stdev=8192.30 00:13:50.184 clat (usec): min=177, max=329, avg=231.98, stdev=29.44 00:13:50.184 lat (usec): min=189, max=343, avg=247.41, stdev=30.62 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 186], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:13:50.184 | 30.00th=[ 210], 40.00th=[ 225], 50.00th=[ 235], 60.00th=[ 241], 00:13:50.184 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 285], 00:13:50.184 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 330], 99.95th=[ 330], 00:13:50.184 | 99.99th=[ 330] 00:13:50.184 bw ( KiB/s): min= 4087, max= 4087, per=17.47%, avg=4087.00, stdev= 0.00, samples=1 00:13:50.184 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:13:50.184 lat (usec) : 250=73.08%, 500=22.80% 00:13:50.184 lat (msec) : 20=0.19%, 50=3.93% 00:13:50.184 cpu : usr=0.60%, sys=0.50%, ctx=537, majf=0, minf=1 00:13:50.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.184 job3: (groupid=0, jobs=1): err= 0: pid=3377682: Mon Jul 15 12:53:07 2024 00:13:50.184 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:50.184 slat (nsec): min=7945, max=40944, avg=14712.86, stdev=5471.89 00:13:50.184 clat (usec): min=228, max=674, avg=373.88, stdev=80.85 00:13:50.184 lat (usec): min=239, max=696, avg=388.60, stdev=82.98 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 306], 00:13:50.184 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 375], 00:13:50.184 | 70.00th=[ 396], 80.00th=[ 441], 90.00th=[ 502], 95.00th=[ 537], 00:13:50.184 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 660], 99.95th=[ 676], 00:13:50.184 | 99.99th=[ 676] 00:13:50.184 write: IOPS=1622, BW=6490KiB/s (6645kB/s)(6496KiB/1001msec); 0 zone resets 00:13:50.184 slat (nsec): min=9506, max=54624, avg=14585.96, stdev=5619.71 00:13:50.184 clat (usec): min=160, max=592, avg=225.34, stdev=42.59 00:13:50.184 lat (usec): min=178, max=603, avg=239.92, stdev=44.80 00:13:50.184 clat percentiles (usec): 00:13:50.184 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 
192], 00:13:50.184 | 30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 225], 00:13:50.184 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 285], 95.00th=[ 314], 00:13:50.184 | 99.00th=[ 363], 99.50th=[ 392], 99.90th=[ 433], 99.95th=[ 594], 00:13:50.184 | 99.99th=[ 594] 00:13:50.184 bw ( KiB/s): min= 8175, max= 8175, per=34.95%, avg=8175.00, stdev= 0.00, samples=1 00:13:50.184 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:13:50.184 lat (usec) : 250=41.14%, 500=53.86%, 750=5.00% 00:13:50.184 cpu : usr=3.60%, sys=6.20%, ctx=3160, majf=0, minf=1 00:13:50.184 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.184 issued rwts: total=1536,1624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.184 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.184 00:13:50.184 Run status group 0 (all jobs): 00:13:50.184 READ: bw=18.2MiB/s (19.1MB/s), 91.5KiB/s-6350KiB/s (93.7kB/s-6502kB/s), io=18.3MiB (19.2MB), run=1001-1005msec 00:13:50.184 WRITE: bw=22.8MiB/s (24.0MB/s), 2038KiB/s-8184KiB/s (2087kB/s-8380kB/s), io=23.0MiB (24.1MB), run=1001-1005msec 00:13:50.184 00:13:50.184 Disk stats (read/write): 00:13:50.184 nvme0n1: ios=1306/1536, merge=0/0, ticks=442/342, in_queue=784, util=86.27% 00:13:50.184 nvme0n2: ios=1584/1536, merge=0/0, ticks=516/322, in_queue=838, util=90.55% 00:13:50.184 nvme0n3: ios=64/512, merge=0/0, ticks=1378/115, in_queue=1493, util=97.28% 00:13:50.184 nvme0n4: ios=1245/1536, merge=0/0, ticks=439/331, in_queue=770, util=89.55% 00:13:50.184 12:53:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:50.184 [global] 00:13:50.184 thread=1 00:13:50.184 invalidate=1 00:13:50.184 rw=write 00:13:50.184 time_based=1 00:13:50.184 runtime=1 00:13:50.184 ioengine=libaio 00:13:50.184 direct=1 00:13:50.184 bs=4096 00:13:50.184 iodepth=128 00:13:50.184 norandommap=0 00:13:50.184 numjobs=1 00:13:50.184 00:13:50.184 verify_dump=1 00:13:50.184 verify_backlog=512 00:13:50.184 verify_state_save=0 00:13:50.184 do_verify=1 00:13:50.184 verify=crc32c-intel 00:13:50.184 [job0] 00:13:50.184 filename=/dev/nvme0n1 00:13:50.184 [job1] 00:13:50.184 filename=/dev/nvme0n2 00:13:50.184 [job2] 00:13:50.184 filename=/dev/nvme0n3 00:13:50.184 [job3] 00:13:50.184 filename=/dev/nvme0n4 00:13:50.184 Could not set queue depth (nvme0n1) 00:13:50.184 Could not set queue depth (nvme0n2) 00:13:50.184 Could not set queue depth (nvme0n3) 00:13:50.184 Could not set queue depth (nvme0n4) 00:13:50.184 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.184 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.184 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.184 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:50.184 fio-3.35 00:13:50.184 Starting 4 threads 00:13:51.574 00:13:51.574 job0: (groupid=0, jobs=1): err= 0: pid=3377911: Mon Jul 15 12:53:09 2024 00:13:51.574 read: IOPS=3311, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:13:51.574 slat (usec): min=2, max=16513, avg=131.94, stdev=850.50 00:13:51.574 clat (usec): min=2352, 
max=85026, avg=15813.24, stdev=7990.48 00:13:51.574 lat (usec): min=6270, max=85036, avg=15945.18, stdev=8105.23 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 6849], 5.00th=[10421], 10.00th=[11338], 20.00th=[11994], 00:13:51.574 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13960], 60.00th=[14877], 00:13:51.574 | 70.00th=[15926], 80.00th=[16909], 90.00th=[22152], 95.00th=[28443], 00:13:51.574 | 99.00th=[57410], 99.50th=[78119], 99.90th=[85459], 99.95th=[85459], 00:13:51.574 | 99.99th=[85459] 00:13:51.574 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:13:51.574 slat (usec): min=3, max=24973, avg=145.60, stdev=1083.91 00:13:51.574 clat (usec): min=2581, max=85386, avg=20345.34, stdev=13225.95 00:13:51.574 lat (usec): min=2599, max=88879, avg=20490.94, stdev=13294.62 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 6194], 5.00th=[ 7373], 10.00th=[11207], 20.00th=[13042], 00:13:51.574 | 30.00th=[13435], 40.00th=[13960], 50.00th=[15008], 60.00th=[16057], 00:13:51.574 | 70.00th=[19792], 80.00th=[29754], 90.00th=[35914], 95.00th=[41157], 00:13:51.574 | 99.00th=[81265], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:13:51.574 | 99.99th=[85459] 00:13:51.574 bw ( KiB/s): min=13944, max=14728, per=23.01%, avg=14336.00, stdev=554.37, samples=2 00:13:51.574 iops : min= 3486, max= 3682, avg=3584.00, stdev=138.59, samples=2 00:13:51.574 lat (msec) : 4=0.04%, 10=6.25%, 20=72.98%, 50=18.84%, 100=1.88% 00:13:51.574 cpu : usr=2.29%, sys=6.98%, ctx=259, majf=0, minf=9 00:13:51.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:51.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.574 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.574 job1: (groupid=0, jobs=1): err= 0: pid=3377912: Mon Jul 15 12:53:09 2024 00:13:51.574 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:13:51.574 slat (usec): min=2, max=16869, avg=109.40, stdev=728.99 00:13:51.574 clat (usec): min=7052, max=54077, avg=14287.26, stdev=7944.92 00:13:51.574 lat (usec): min=7627, max=54095, avg=14396.66, stdev=7988.56 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 8160], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:13:51.574 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600], 00:13:51.574 | 70.00th=[12911], 80.00th=[15401], 90.00th=[24773], 95.00th=[33817], 00:13:51.574 | 99.00th=[51119], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:13:51.574 | 99.99th=[54264] 00:13:51.574 write: IOPS=4363, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1003msec); 0 zone resets 00:13:51.574 slat (usec): min=4, max=23495, avg=117.32, stdev=836.92 00:13:51.574 clat (usec): min=2149, max=65493, avg=15031.98, stdev=11161.39 00:13:51.574 lat (usec): min=2783, max=65546, avg=15149.30, stdev=11242.45 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 5669], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10159], 00:13:51.574 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:13:51.574 | 70.00th=[11600], 80.00th=[14877], 90.00th=[31851], 95.00th=[47449], 00:13:51.574 | 99.00th=[55313], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:13:51.574 | 99.99th=[65274] 00:13:51.574 bw ( KiB/s): min=12752, max=21248, per=27.28%, avg=17000.00, stdev=6007.58, samples=2 00:13:51.574 iops : min= 
3188, max= 5312, avg=4250.00, stdev=1501.89, samples=2 00:13:51.574 lat (msec) : 4=0.39%, 10=13.17%, 20=71.98%, 50=12.10%, 100=2.36% 00:13:51.574 cpu : usr=4.69%, sys=7.49%, ctx=349, majf=0, minf=17 00:13:51.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:51.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.574 issued rwts: total=4096,4377,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.574 job2: (groupid=0, jobs=1): err= 0: pid=3377913: Mon Jul 15 12:53:09 2024 00:13:51.574 read: IOPS=4669, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1004msec) 00:13:51.574 slat (usec): min=2, max=10388, avg=96.15, stdev=633.11 00:13:51.574 clat (usec): min=1063, max=65239, avg=12912.70, stdev=3700.57 00:13:51.574 lat (usec): min=2356, max=65860, avg=13008.85, stdev=3745.59 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 4113], 5.00th=[ 8029], 10.00th=[ 9765], 20.00th=[11076], 00:13:51.574 | 30.00th=[11600], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:13:51.574 | 70.00th=[13566], 80.00th=[13960], 90.00th=[16319], 95.00th=[19006], 00:13:51.574 | 99.00th=[23987], 99.50th=[26608], 99.90th=[59507], 99.95th=[59507], 00:13:51.574 | 99.99th=[65274] 00:13:51.574 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:13:51.574 slat (usec): min=3, max=14249, avg=95.90, stdev=620.66 00:13:51.574 clat (usec): min=890, max=33119, avg=13039.10, stdev=4450.25 00:13:51.574 lat (usec): min=904, max=33132, avg=13135.00, stdev=4498.34 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 4555], 5.00th=[ 7046], 10.00th=[ 8717], 20.00th=[10683], 00:13:51.574 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:13:51.574 | 70.00th=[12911], 80.00th=[13829], 90.00th=[19006], 95.00th=[25035], 00:13:51.574 | 99.00th=[26608], 99.50th=[28443], 99.90th=[28967], 99.95th=[28967], 00:13:51.574 | 99.99th=[33162] 00:13:51.574 bw ( KiB/s): min=20112, max=20464, per=32.56%, avg=20288.00, stdev=248.90, samples=2 00:13:51.574 iops : min= 5028, max= 5116, avg=5072.00, stdev=62.23, samples=2 00:13:51.574 lat (usec) : 1000=0.06% 00:13:51.574 lat (msec) : 2=0.01%, 4=0.51%, 10=12.47%, 20=80.07%, 50=6.80% 00:13:51.574 lat (msec) : 100=0.08% 00:13:51.574 cpu : usr=3.79%, sys=5.28%, ctx=424, majf=0, minf=11 00:13:51.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:51.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.574 issued rwts: total=4688,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.574 job3: (groupid=0, jobs=1): err= 0: pid=3377914: Mon Jul 15 12:53:09 2024 00:13:51.574 read: IOPS=2095, BW=8382KiB/s (8584kB/s)(8416KiB/1004msec) 00:13:51.574 slat (usec): min=2, max=24146, avg=233.69, stdev=1527.56 00:13:51.574 clat (usec): min=1434, max=72411, avg=28218.13, stdev=16665.68 00:13:51.574 lat (usec): min=8538, max=72432, avg=28451.82, stdev=16732.69 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 8586], 5.00th=[11863], 10.00th=[13042], 20.00th=[14615], 00:13:51.574 | 30.00th=[15533], 40.00th=[16909], 50.00th=[19268], 60.00th=[26084], 00:13:51.574 | 70.00th=[40633], 80.00th=[47973], 90.00th=[53216], 95.00th=[56886], 
00:13:51.574 | 99.00th=[66323], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:13:51.574 | 99.99th=[72877] 00:13:51.574 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:13:51.574 slat (usec): min=4, max=13336, avg=191.51, stdev=1002.28 00:13:51.574 clat (usec): min=5425, max=72429, avg=26513.31, stdev=13254.65 00:13:51.574 lat (usec): min=5445, max=72469, avg=26704.82, stdev=13339.51 00:13:51.574 clat percentiles (usec): 00:13:51.574 | 1.00th=[ 6915], 5.00th=[10290], 10.00th=[10683], 20.00th=[12780], 00:13:51.574 | 30.00th=[15270], 40.00th=[19792], 50.00th=[25035], 60.00th=[31065], 00:13:51.574 | 70.00th=[32900], 80.00th=[39584], 90.00th=[45351], 95.00th=[50594], 00:13:51.574 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[72877], 00:13:51.574 | 99.99th=[72877] 00:13:51.574 bw ( KiB/s): min= 7952, max=11952, per=15.97%, avg=9952.00, stdev=2828.43, samples=2 00:13:51.574 iops : min= 1988, max= 2988, avg=2488.00, stdev=707.11, samples=2 00:13:51.574 lat (msec) : 2=0.02%, 10=3.13%, 20=43.67%, 50=42.54%, 100=10.63% 00:13:51.574 cpu : usr=1.69%, sys=4.69%, ctx=251, majf=0, minf=13 00:13:51.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:13:51.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:51.574 issued rwts: total=2104,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:51.574 00:13:51.574 Run status group 0 (all jobs): 00:13:51.574 READ: bw=55.3MiB/s (58.0MB/s), 8382KiB/s-18.2MiB/s (8584kB/s-19.1MB/s), io=55.5MiB (58.2MB), run=1003-1004msec 00:13:51.574 WRITE: bw=60.9MiB/s (63.8MB/s), 9.96MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=61.1MiB (64.1MB), run=1003-1004msec 00:13:51.574 00:13:51.574 Disk stats (read/write): 00:13:51.574 nvme0n1: ios=2592/3072, merge=0/0, ticks=21723/30821, in_queue=52544, util=98.20% 00:13:51.574 nvme0n2: ios=3316/3584, merge=0/0, ticks=16145/18462, in_queue=34607, util=98.17% 00:13:51.574 nvme0n3: ios=4144/4155, merge=0/0, ticks=38560/37396, in_queue=75956, util=98.12% 00:13:51.574 nvme0n4: ios=2054/2048, merge=0/0, ticks=32442/43615, in_queue=76057, util=98.53% 00:13:51.574 12:53:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:51.574 [global] 00:13:51.574 thread=1 00:13:51.574 invalidate=1 00:13:51.574 rw=randwrite 00:13:51.574 time_based=1 00:13:51.574 runtime=1 00:13:51.574 ioengine=libaio 00:13:51.574 direct=1 00:13:51.574 bs=4096 00:13:51.574 iodepth=128 00:13:51.574 norandommap=0 00:13:51.574 numjobs=1 00:13:51.574 00:13:51.574 verify_dump=1 00:13:51.574 verify_backlog=512 00:13:51.574 verify_state_save=0 00:13:51.574 do_verify=1 00:13:51.574 verify=crc32c-intel 00:13:51.574 [job0] 00:13:51.574 filename=/dev/nvme0n1 00:13:51.574 [job1] 00:13:51.574 filename=/dev/nvme0n2 00:13:51.574 [job2] 00:13:51.574 filename=/dev/nvme0n3 00:13:51.574 [job3] 00:13:51.574 filename=/dev/nvme0n4 00:13:51.574 Could not set queue depth (nvme0n1) 00:13:51.574 Could not set queue depth (nvme0n2) 00:13:51.574 Could not set queue depth (nvme0n3) 00:13:51.574 Could not set queue depth (nvme0n4) 00:13:51.574 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.574 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.574 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.574 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.574 fio-3.35 00:13:51.574 Starting 4 threads 00:13:52.966 00:13:52.966 job0: (groupid=0, jobs=1): err= 0: pid=3378144: Mon Jul 15 12:53:10 2024 00:13:52.966 read: IOPS=2027, BW=8111KiB/s (8306kB/s)(8192KiB/1010msec) 00:13:52.966 slat (usec): min=3, max=11662, avg=216.36, stdev=1030.20 00:13:52.966 clat (usec): min=13310, max=44346, avg=27677.59, stdev=6123.32 00:13:52.966 lat (usec): min=16105, max=44355, avg=27893.95, stdev=6089.01 00:13:52.966 clat percentiles (usec): 00:13:52.966 | 1.00th=[16188], 5.00th=[19530], 10.00th=[21103], 20.00th=[22938], 00:13:52.966 | 30.00th=[23987], 40.00th=[24249], 50.00th=[25297], 60.00th=[27657], 00:13:52.966 | 70.00th=[31065], 80.00th=[34341], 90.00th=[37487], 95.00th=[38536], 00:13:52.966 | 99.00th=[40109], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:13:52.966 | 99.99th=[44303] 00:13:52.966 write: IOPS=2217, BW=8871KiB/s (9084kB/s)(8960KiB/1010msec); 0 zone resets 00:13:52.966 slat (usec): min=5, max=29674, avg=240.77, stdev=1650.04 00:13:52.966 clat (usec): min=6618, max=83929, avg=31157.61, stdev=17137.77 00:13:52.966 lat (usec): min=13504, max=83956, avg=31398.38, stdev=17204.81 00:13:52.966 clat percentiles (usec): 00:13:52.966 | 1.00th=[13566], 5.00th=[15139], 10.00th=[15270], 20.00th=[15533], 00:13:52.966 | 30.00th=[16909], 40.00th=[19530], 50.00th=[24249], 60.00th=[30016], 00:13:52.966 | 70.00th=[41157], 80.00th=[50070], 90.00th=[59507], 95.00th=[63701], 00:13:52.966 | 99.00th=[68682], 99.50th=[70779], 99.90th=[84411], 99.95th=[84411], 00:13:52.966 | 99.99th=[84411] 00:13:52.966 bw ( KiB/s): min= 7880, max= 9016, per=14.45%, avg=8448.00, stdev=803.27, samples=2 00:13:52.966 iops : min= 1970, max= 2254, avg=2112.00, stdev=200.82, samples=2 00:13:52.966 lat (msec) : 10=0.02%, 20=24.91%, 50=64.04%, 100=11.03% 00:13:52.966 cpu : usr=2.38%, sys=4.06%, ctx=180, majf=0, minf=15 00:13:52.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.966 issued rwts: total=2048,2240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.966 job1: (groupid=0, jobs=1): err= 0: pid=3378145: Mon Jul 15 12:53:10 2024 00:13:52.966 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:13:52.966 slat (usec): min=3, max=22646, avg=182.06, stdev=1110.00 00:13:52.966 clat (msec): min=7, max=104, avg=22.87, stdev=17.64 00:13:52.966 lat (msec): min=7, max=104, avg=23.05, stdev=17.79 00:13:52.966 clat percentiles (msec): 00:13:52.966 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 16], 00:13:52.966 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 20], 00:13:52.966 | 70.00th=[ 21], 80.00th=[ 23], 90.00th=[ 36], 95.00th=[ 80], 00:13:52.966 | 99.00th=[ 92], 99.50th=[ 95], 99.90th=[ 104], 99.95th=[ 105], 00:13:52.966 | 99.99th=[ 105] 00:13:52.966 write: IOPS=3412, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1002msec); 0 zone resets 00:13:52.966 slat (usec): min=4, max=16352, avg=120.69, stdev=657.13 00:13:52.966 clat (usec): min=399, max=75051, avg=16507.28, stdev=8312.93 00:13:52.966 lat (usec): min=3064, 
max=75068, avg=16627.97, stdev=8354.06 00:13:52.966 clat percentiles (usec): 00:13:52.966 | 1.00th=[ 3392], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[10028], 00:13:52.966 | 30.00th=[10683], 40.00th=[13698], 50.00th=[15008], 60.00th=[16450], 00:13:52.966 | 70.00th=[17957], 80.00th=[21365], 90.00th=[27132], 95.00th=[31065], 00:13:52.966 | 99.00th=[52691], 99.50th=[60031], 99.90th=[60031], 99.95th=[64750], 00:13:52.966 | 99.99th=[74974] 00:13:52.966 bw ( KiB/s): min=12288, max=14040, per=22.51%, avg=13164.00, stdev=1238.85, samples=2 00:13:52.966 iops : min= 3072, max= 3510, avg=3291.00, stdev=309.71, samples=2 00:13:52.966 lat (usec) : 500=0.02% 00:13:52.966 lat (msec) : 4=0.65%, 10=14.64%, 20=57.93%, 50=22.88%, 100=3.81% 00:13:52.966 lat (msec) : 250=0.09% 00:13:52.966 cpu : usr=3.00%, sys=6.49%, ctx=292, majf=0, minf=9 00:13:52.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.966 issued rwts: total=3072,3419,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.966 job2: (groupid=0, jobs=1): err= 0: pid=3378146: Mon Jul 15 12:53:10 2024 00:13:52.966 read: IOPS=4609, BW=18.0MiB/s (18.9MB/s)(18.9MiB/1051msec) 00:13:52.966 slat (usec): min=3, max=26335, avg=106.46, stdev=736.93 00:13:52.966 clat (usec): min=3980, max=75967, avg=14536.71, stdev=10692.00 00:13:52.966 lat (usec): min=3987, max=75971, avg=14643.16, stdev=10738.31 00:13:52.966 clat percentiles (usec): 00:13:52.966 | 1.00th=[ 5473], 5.00th=[ 6325], 10.00th=[10159], 20.00th=[11338], 00:13:52.966 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:13:52.966 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[42730], 00:13:52.966 | 99.00th=[70779], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:13:52.966 | 99.99th=[76022] 00:13:52.966 write: IOPS=4871, BW=19.0MiB/s (20.0MB/s)(20.0MiB/1051msec); 0 zone resets 00:13:52.966 slat (usec): min=3, max=5361, avg=86.42, stdev=346.97 00:13:52.966 clat (usec): min=3485, max=48102, avg=12221.98, stdev=4270.46 00:13:52.966 lat (usec): min=3496, max=48652, avg=12308.40, stdev=4271.33 00:13:52.966 clat percentiles (usec): 00:13:52.966 | 1.00th=[ 5604], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10552], 00:13:52.966 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:13:52.966 | 70.00th=[12518], 80.00th=[12649], 90.00th=[13566], 95.00th=[14222], 00:13:52.966 | 99.00th=[43779], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:13:52.966 | 99.99th=[47973] 00:13:52.966 bw ( KiB/s): min=20480, max=20480, per=35.02%, avg=20480.00, stdev= 0.00, samples=2 00:13:52.966 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:13:52.966 lat (msec) : 4=0.14%, 10=10.40%, 20=84.59%, 50=3.59%, 100=1.28% 00:13:52.966 cpu : usr=5.43%, sys=10.67%, ctx=695, majf=0, minf=17 00:13:52.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:52.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.966 issued rwts: total=4845,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.967 job3: (groupid=0, jobs=1): err= 0: pid=3378147: Mon Jul 15 12:53:10 2024 00:13:52.967 read: 
IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec) 00:13:52.967 slat (usec): min=2, max=18015, avg=109.99, stdev=664.24 00:13:52.967 clat (usec): min=8088, max=54735, avg=14264.48, stdev=5009.03 00:13:52.967 lat (usec): min=8125, max=54869, avg=14374.47, stdev=5045.57 00:13:52.967 clat percentiles (usec): 00:13:52.967 | 1.00th=[10028], 5.00th=[10945], 10.00th=[12256], 20.00th=[12518], 00:13:52.967 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:13:52.967 | 70.00th=[13566], 80.00th=[13960], 90.00th=[16581], 95.00th=[20841], 00:13:52.967 | 99.00th=[43254], 99.50th=[49546], 99.90th=[54789], 99.95th=[54789], 00:13:52.967 | 99.99th=[54789] 00:13:52.967 write: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(17.9MiB/1010msec); 0 zone resets 00:13:52.967 slat (usec): min=3, max=15552, avg=107.28, stdev=678.66 00:13:52.967 clat (usec): min=333, max=54812, avg=15173.73, stdev=7919.95 00:13:52.967 lat (usec): min=827, max=54835, avg=15281.01, stdev=7972.15 00:13:52.967 clat percentiles (usec): 00:13:52.967 | 1.00th=[ 979], 5.00th=[ 7570], 10.00th=[ 9765], 20.00th=[11731], 00:13:52.967 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:13:52.967 | 70.00th=[14091], 80.00th=[15926], 90.00th=[30278], 95.00th=[34866], 00:13:52.967 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:13:52.967 | 99.99th=[54789] 00:13:52.967 bw ( KiB/s): min=15968, max=19688, per=30.49%, avg=17828.00, stdev=2630.44, samples=2 00:13:52.967 iops : min= 3992, max= 4922, avg=4457.00, stdev=657.61, samples=2 00:13:52.967 lat (usec) : 500=0.01%, 1000=0.69% 00:13:52.967 lat (msec) : 2=0.24%, 4=0.36%, 10=5.22%, 20=84.28%, 50=9.03% 00:13:52.967 lat (msec) : 100=0.17% 00:13:52.967 cpu : usr=3.27%, sys=7.73%, ctx=395, majf=0, minf=9 00:13:52.967 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:52.967 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.967 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:52.967 issued rwts: total=4096,4585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.967 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:52.967 00:13:52.967 Run status group 0 (all jobs): 00:13:52.967 READ: bw=52.3MiB/s (54.8MB/s), 8111KiB/s-18.0MiB/s (8306kB/s-18.9MB/s), io=54.9MiB (57.6MB), run=1002-1051msec 00:13:52.967 WRITE: bw=57.1MiB/s (59.9MB/s), 8871KiB/s-19.0MiB/s (9084kB/s-20.0MB/s), io=60.0MiB (62.9MB), run=1002-1051msec 00:13:52.967 00:13:52.967 Disk stats (read/write): 00:13:52.967 nvme0n1: ios=1563/2008, merge=0/0, ticks=12592/20072, in_queue=32664, util=100.00% 00:13:52.967 nvme0n2: ios=2583/2879, merge=0/0, ticks=20984/13403, in_queue=34387, util=96.85% 00:13:52.967 nvme0n3: ios=4150/4439, merge=0/0, ticks=17423/12415, in_queue=29838, util=99.48% 00:13:52.967 nvme0n4: ios=3617/3584, merge=0/0, ticks=30989/37028, in_queue=68017, util=96.74% 00:13:52.967 12:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:52.967 12:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3378285 00:13:52.967 12:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:52.967 12:53:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:52.967 [global] 00:13:52.967 thread=1 00:13:52.967 invalidate=1 00:13:52.967 rw=read 00:13:52.967 time_based=1 00:13:52.967 runtime=10 00:13:52.967 ioengine=libaio 00:13:52.967 direct=1 00:13:52.967 bs=4096 
00:13:52.967 iodepth=1 00:13:52.967 norandommap=1 00:13:52.967 numjobs=1 00:13:52.967 00:13:52.967 [job0] 00:13:52.967 filename=/dev/nvme0n1 00:13:52.967 [job1] 00:13:52.967 filename=/dev/nvme0n2 00:13:52.967 [job2] 00:13:52.967 filename=/dev/nvme0n3 00:13:52.967 [job3] 00:13:52.967 filename=/dev/nvme0n4 00:13:52.967 Could not set queue depth (nvme0n1) 00:13:52.967 Could not set queue depth (nvme0n2) 00:13:52.967 Could not set queue depth (nvme0n3) 00:13:52.967 Could not set queue depth (nvme0n4) 00:13:52.967 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.967 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.967 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.967 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:52.967 fio-3.35 00:13:52.967 Starting 4 threads 00:13:56.243 12:53:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:56.243 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:56.243 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=946176, buflen=4096 00:13:56.243 fio: pid=3378495, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.243 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.243 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:56.243 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=27410432, buflen=4096 00:13:56.243 fio: pid=3378494, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.550 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=42483712, buflen=4096 00:13:56.550 fio: pid=3378492, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:56.550 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.550 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:56.866 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:56.866 12:53:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:56.866 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=33374208, buflen=4096 00:13:56.866 fio: pid=3378493, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:56.866 00:13:56.866 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3378492: Mon Jul 15 12:53:15 2024 00:13:56.866 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(40.5MiB/3441msec) 00:13:56.866 slat (usec): min=4, max=26749, avg=12.67, stdev=276.60 00:13:56.866 clat (usec): min=180, max=41982, avg=314.70, stdev=1349.60 00:13:56.866 lat (usec): min=187, max=41993, avg=327.37, stdev=1377.72 00:13:56.866 clat percentiles (usec): 
00:13:56.866 | 1.00th=[ 196], 5.00th=[ 217], 10.00th=[ 229], 20.00th=[ 239], 00:13:56.866 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:13:56.866 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 363], 00:13:56.866 | 99.00th=[ 490], 99.50th=[ 523], 99.90th=[40633], 99.95th=[41157], 00:13:56.866 | 99.99th=[41681] 00:13:56.866 bw ( KiB/s): min= 8376, max=15432, per=43.08%, avg=11780.00, stdev=2478.21, samples=6 00:13:56.866 iops : min= 2094, max= 3858, avg=2945.00, stdev=619.55, samples=6 00:13:56.866 lat (usec) : 250=41.55%, 500=57.64%, 750=0.58%, 1000=0.07% 00:13:56.866 lat (msec) : 2=0.02%, 4=0.02%, 50=0.12% 00:13:56.866 cpu : usr=1.22%, sys=3.52%, ctx=10377, majf=0, minf=1 00:13:56.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.866 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.866 issued rwts: total=10373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.866 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3378493: Mon Jul 15 12:53:15 2024 00:13:56.866 read: IOPS=2189, BW=8757KiB/s (8967kB/s)(31.8MiB/3722msec) 00:13:56.866 slat (usec): min=5, max=15607, avg=16.74, stdev=280.76 00:13:56.866 clat (usec): min=191, max=41882, avg=437.49, stdev=2465.98 00:13:56.866 lat (usec): min=199, max=56716, avg=453.28, stdev=2512.49 00:13:56.866 clat percentiles (usec): 00:13:56.866 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 239], 00:13:56.866 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 285], 00:13:56.866 | 70.00th=[ 297], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 445], 00:13:56.866 | 99.00th=[ 553], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:13:56.866 | 99.99th=[41681] 00:13:56.866 bw ( KiB/s): min= 2440, max=13368, per=33.56%, avg=9177.29, stdev=4748.07, samples=7 00:13:56.866 iops : min= 610, max= 3342, avg=2294.29, stdev=1187.06, samples=7 00:13:56.866 lat (usec) : 250=30.46%, 500=67.00%, 750=2.07%, 1000=0.06% 00:13:56.866 lat (msec) : 2=0.01%, 4=0.01%, 50=0.37% 00:13:56.867 cpu : usr=1.29%, sys=3.22%, ctx=8156, majf=0, minf=1 00:13:56.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.867 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.867 issued rwts: total=8149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.867 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3378494: Mon Jul 15 12:53:15 2024 00:13:56.867 read: IOPS=2110, BW=8442KiB/s (8644kB/s)(26.1MiB/3171msec) 00:13:56.867 slat (nsec): min=4707, max=59947, avg=8754.26, stdev=3608.03 00:13:56.867 clat (usec): min=196, max=42037, avg=459.69, stdev=2781.21 00:13:56.867 lat (usec): min=203, max=42056, avg=468.44, stdev=2782.02 00:13:56.867 clat percentiles (usec): 00:13:56.867 | 1.00th=[ 210], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 229], 00:13:56.867 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:13:56.867 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 408], 00:13:56.867 | 99.00th=[ 586], 99.50th=[ 2737], 99.90th=[41157], 99.95th=[41157], 00:13:56.867 | 99.99th=[42206] 00:13:56.867 bw ( 
KiB/s): min= 104, max=16240, per=30.07%, avg=8221.33, stdev=5946.82, samples=6 00:13:56.867 iops : min= 26, max= 4060, avg=2055.33, stdev=1486.71, samples=6 00:13:56.867 lat (usec) : 250=54.30%, 500=43.76%, 750=1.39%, 1000=0.01% 00:13:56.867 lat (msec) : 2=0.01%, 4=0.01%, 20=0.01%, 50=0.48% 00:13:56.867 cpu : usr=0.85%, sys=2.24%, ctx=6693, majf=0, minf=1 00:13:56.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.867 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.867 issued rwts: total=6693,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.867 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3378495: Mon Jul 15 12:53:15 2024 00:13:56.867 read: IOPS=79, BW=316KiB/s (323kB/s)(924KiB/2928msec) 00:13:56.867 slat (nsec): min=6828, max=44913, avg=12370.03, stdev=6478.32 00:13:56.867 clat (usec): min=213, max=42084, avg=12613.68, stdev=18716.98 00:13:56.867 lat (usec): min=221, max=42103, avg=12626.04, stdev=18721.23 00:13:56.867 clat percentiles (usec): 00:13:56.867 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 247], 20.00th=[ 265], 00:13:56.867 | 30.00th=[ 281], 40.00th=[ 310], 50.00th=[ 330], 60.00th=[ 375], 00:13:56.867 | 70.00th=[30802], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:56.867 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:56.867 | 99.99th=[42206] 00:13:56.867 bw ( KiB/s): min= 96, max= 1184, per=1.28%, avg=352.00, stdev=470.98, samples=5 00:13:56.867 iops : min= 24, max= 296, avg=88.00, stdev=117.75, samples=5 00:13:56.867 lat (usec) : 250=11.21%, 500=56.47%, 750=1.72% 00:13:56.867 lat (msec) : 50=30.17% 00:13:56.867 cpu : usr=0.00%, sys=0.17%, ctx=233, majf=0, minf=1 00:13:56.867 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:56.867 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.867 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.867 issued rwts: total=232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.867 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:56.867 00:13:56.867 Run status group 0 (all jobs): 00:13:56.867 READ: bw=26.7MiB/s (28.0MB/s), 316KiB/s-11.8MiB/s (323kB/s-12.3MB/s), io=99.4MiB (104MB), run=2928-3722msec 00:13:56.867 00:13:56.867 Disk stats (read/write): 00:13:56.867 nvme0n1: ios=10078/0, merge=0/0, ticks=3161/0, in_queue=3161, util=94.99% 00:13:56.867 nvme0n2: ios=8145/0, merge=0/0, ticks=3360/0, in_queue=3360, util=95.37% 00:13:56.867 nvme0n3: ios=6540/0, merge=0/0, ticks=2997/0, in_queue=2997, util=96.79% 00:13:56.867 nvme0n4: ios=275/0, merge=0/0, ticks=3756/0, in_queue=3756, util=99.02% 00:13:57.125 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.125 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:57.383 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.383 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:57.641 12:53:15 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.641 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:57.898 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.898 12:53:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:58.157 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:58.157 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 3378285 00:13:58.157 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:58.157 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:58.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:58.416 nvmf hotplug test: fio failed as expected 00:13:58.416 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:58.674 rmmod nvme_tcp 00:13:58.674 rmmod nvme_fabrics 00:13:58.674 rmmod nvme_keyring 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # 
set -e 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3376376 ']' 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3376376 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3376376 ']' 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3376376 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3376376 00:13:58.674 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:58.675 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:58.675 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3376376' 00:13:58.675 killing process with pid 3376376 00:13:58.675 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3376376 00:13:58.675 12:53:16 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3376376 00:13:58.932 12:53:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:58.932 12:53:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:58.932 12:53:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:58.932 12:53:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.932 12:53:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:58.933 12:53:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.933 12:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.933 12:53:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.468 12:53:19 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:01.468 00:14:01.468 real 0m23.392s 00:14:01.468 user 1m21.557s 00:14:01.468 sys 0m7.310s 00:14:01.468 12:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:01.468 12:53:19 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.468 ************************************ 00:14:01.468 END TEST nvmf_fio_target 00:14:01.468 ************************************ 00:14:01.468 12:53:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:01.468 12:53:19 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:01.468 12:53:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:01.468 12:53:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:01.468 12:53:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.468 ************************************ 00:14:01.468 START TEST nvmf_bdevio 00:14:01.468 ************************************ 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:01.468 * Looking 
for test storage... 00:14:01.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:01.468 12:53:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:03.369 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:03.370 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:03.370 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:03.370 Found net devices under 0000:84:00.0: cvl_0_0 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:03.370 
Found net devices under 0000:84:00.1: cvl_0_1 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:03.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:14:03.370 00:14:03.370 --- 10.0.0.2 ping statistics --- 00:14:03.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.370 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:03.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:14:03.370 00:14:03.370 --- 10.0.0.1 ping statistics --- 00:14:03.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.370 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3381132 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3381132 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3381132 ']' 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:03.370 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.370 [2024-07-15 12:53:21.479194] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:14:03.370 [2024-07-15 12:53:21.479266] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.370 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.370 [2024-07-15 12:53:21.544304] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:03.628 [2024-07-15 12:53:21.647904] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.628 [2024-07-15 12:53:21.647954] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
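The nvmfappstart step above amounts to launching nvmf_tgt inside the freshly created namespace and blocking until its RPC socket answers. A minimal sketch of that step, assuming the workspace paths shown in this log and using rpc_get_methods as the readiness probe (the real waitforlisten helper is more elaborate):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  # poll the UNIX-domain RPC socket until the app is ready to accept config RPCs
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done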
00:14:03.628 [2024-07-15 12:53:21.647978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.628 [2024-07-15 12:53:21.647989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.628 [2024-07-15 12:53:21.647998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.628 [2024-07-15 12:53:21.648094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:03.628 [2024-07-15 12:53:21.648156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:03.628 [2024-07-15 12:53:21.648220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:03.628 [2024-07-15 12:53:21.648224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.628 [2024-07-15 12:53:21.799621] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.628 Malloc0 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.628 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:14:03.888 [2024-07-15 12:53:21.852873] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:03.888 { 00:14:03.888 "params": { 00:14:03.888 "name": "Nvme$subsystem", 00:14:03.888 "trtype": "$TEST_TRANSPORT", 00:14:03.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.888 "adrfam": "ipv4", 00:14:03.888 "trsvcid": "$NVMF_PORT", 00:14:03.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.888 "hdgst": ${hdgst:-false}, 00:14:03.888 "ddgst": ${ddgst:-false} 00:14:03.888 }, 00:14:03.888 "method": "bdev_nvme_attach_controller" 00:14:03.888 } 00:14:03.888 EOF 00:14:03.888 )") 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:03.888 12:53:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:03.888 "params": { 00:14:03.888 "name": "Nvme1", 00:14:03.888 "trtype": "tcp", 00:14:03.888 "traddr": "10.0.0.2", 00:14:03.888 "adrfam": "ipv4", 00:14:03.888 "trsvcid": "4420", 00:14:03.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:03.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:03.888 "hdgst": false, 00:14:03.888 "ddgst": false 00:14:03.888 }, 00:14:03.888 "method": "bdev_nvme_attach_controller" 00:14:03.888 }' 00:14:03.888 [2024-07-15 12:53:21.900795] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
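Stripped of the xtrace noise, the target configuration for this bdevio run is five RPCs. A sketch issuing them through scripts/rpc.py directly (the test's rpc_cmd wrapper ends up at the same /var/tmp/spdk.sock socket):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB IO unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio is then started with the generated --json config printed above, which is essentially a single bdev_nvme_attach_controller call pointing an initiator-side controller named Nvme1 at that 10.0.0.2:4420 listener.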
00:14:03.888 [2024-07-15 12:53:21.900888] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381165 ] 00:14:03.888 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.888 [2024-07-15 12:53:21.963097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:03.888 [2024-07-15 12:53:22.080208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.888 [2024-07-15 12:53:22.080259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:03.888 [2024-07-15 12:53:22.080263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.453 I/O targets: 00:14:04.453 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:04.453 00:14:04.453 00:14:04.453 CUnit - A unit testing framework for C - Version 2.1-3 00:14:04.453 http://cunit.sourceforge.net/ 00:14:04.453 00:14:04.453 00:14:04.453 Suite: bdevio tests on: Nvme1n1 00:14:04.453 Test: blockdev write read block ...passed 00:14:04.453 Test: blockdev write zeroes read block ...passed 00:14:04.453 Test: blockdev write zeroes read no split ...passed 00:14:04.453 Test: blockdev write zeroes read split ...passed 00:14:04.453 Test: blockdev write zeroes read split partial ...passed 00:14:04.453 Test: blockdev reset ...[2024-07-15 12:53:22.583615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:04.453 [2024-07-15 12:53:22.583727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2128bd0 (9): Bad file descriptor 00:14:04.453 [2024-07-15 12:53:22.635213] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:04.453 passed 00:14:04.453 Test: blockdev write read 8 blocks ...passed 00:14:04.453 Test: blockdev write read size > 128k ...passed 00:14:04.453 Test: blockdev write read invalid size ...passed 00:14:04.711 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:04.711 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:04.711 Test: blockdev write read max offset ...passed 00:14:04.711 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:04.711 Test: blockdev writev readv 8 blocks ...passed 00:14:04.711 Test: blockdev writev readv 30 x 1block ...passed 00:14:04.711 Test: blockdev writev readv block ...passed 00:14:04.711 Test: blockdev writev readv size > 128k ...passed 00:14:04.711 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:04.711 Test: blockdev comparev and writev ...[2024-07-15 12:53:22.846249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.846286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.846311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.846328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.846684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.846709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.846731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.846757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.847113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.847137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.847158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.847174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.847505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.847529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:04.711 [2024-07-15 12:53:22.847551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:04.711 [2024-07-15 12:53:22.847567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:04.711 passed 00:14:04.970 Test: blockdev nvme passthru rw ...passed 00:14:04.970 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:53:22.930069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.970 [2024-07-15 12:53:22.930096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:04.970 [2024-07-15 12:53:22.930244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.970 [2024-07-15 12:53:22.930267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:04.970 [2024-07-15 12:53:22.930412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.970 [2024-07-15 12:53:22.930434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:04.970 [2024-07-15 12:53:22.930578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:04.970 [2024-07-15 12:53:22.930600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:04.970 passed 00:14:04.970 Test: blockdev nvme admin passthru ...passed 00:14:04.970 Test: blockdev copy ...passed 00:14:04.970 00:14:04.970 Run Summary: Type Total Ran Passed Failed Inactive 00:14:04.970 suites 1 1 n/a 0 0 00:14:04.970 tests 23 23 23 0 0 00:14:04.970 asserts 152 152 152 0 n/a 00:14:04.970 00:14:04.970 Elapsed time = 1.117 seconds 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:05.229 rmmod nvme_tcp 00:14:05.229 rmmod nvme_fabrics 00:14:05.229 rmmod nvme_keyring 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3381132 ']' 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3381132 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
3381132 ']' 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3381132 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3381132 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3381132' 00:14:05.229 killing process with pid 3381132 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3381132 00:14:05.229 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3381132 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.488 12:53:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.024 12:53:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:08.024 00:14:08.024 real 0m6.550s 00:14:08.024 user 0m11.047s 00:14:08.024 sys 0m2.105s 00:14:08.024 12:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:08.024 12:53:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:08.024 ************************************ 00:14:08.024 END TEST nvmf_bdevio 00:14:08.024 ************************************ 00:14:08.024 12:53:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:08.024 12:53:25 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:08.024 12:53:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:08.024 12:53:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:08.024 12:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:08.024 ************************************ 00:14:08.024 START TEST nvmf_auth_target 00:14:08.024 ************************************ 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:08.024 * Looking for test storage... 
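For reference, the nvmftestfini/killprocess sequence that closed out nvmf_bdevio just above reduces to roughly the following (a sketch; _remove_spdk_ns is approximated here with a plain ip netns delete):

  sync
  modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, as logged
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # killprocess: stop the nvmf_tgt reactors
  ip netns delete cvl_0_0_ns_spdk    # assumption: stands in for _remove_spdk_ns
  ip -4 addr flush cvl_0_1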
00:14:08.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:08.024 12:53:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.921 12:53:27 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:09.921 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.921 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:09.922 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:14:09.922 Found net devices under 0000:84:00.0: cvl_0_0 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:09.922 Found net devices under 0000:84:00.1: cvl_0_1 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:09.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:14:09.922 00:14:09.922 --- 10.0.0.2 ping statistics --- 00:14:09.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.922 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:14:09.922 00:14:09.922 --- 10.0.0.1 ping statistics --- 00:14:09.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.922 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3383264 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3383264 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3383264 ']' 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
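The interface plumbing traced above is the same nvmf_tcp_init sequence used for every TCP run in this job: one port of the E810 pair (cvl_0_0) is moved into a private namespace and serves as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Collected in order, the commands from the log are:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability check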
00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:09.922 12:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3383390 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=fdaa40e2e8e7f6d72bff03b99451e02156556ec8129892c2 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Zqw 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key fdaa40e2e8e7f6d72bff03b99451e02156556ec8129892c2 0 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 fdaa40e2e8e7f6d72bff03b99451e02156556ec8129892c2 0 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=fdaa40e2e8e7f6d72bff03b99451e02156556ec8129892c2 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:10.180 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Zqw 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Zqw 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Zqw 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f99be3963091b9367b0f806af50df330062efd228a852c3807b70707bd23d7a5 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sra 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f99be3963091b9367b0f806af50df330062efd228a852c3807b70707bd23d7a5 3 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f99be3963091b9367b0f806af50df330062efd228a852c3807b70707bd23d7a5 3 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f99be3963091b9367b0f806af50df330062efd228a852c3807b70707bd23d7a5 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sra 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sra 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.sra 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=10cf14aa0e6498bacc088531bd486da3 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.tWT 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 10cf14aa0e6498bacc088531bd486da3 1 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 10cf14aa0e6498bacc088531bd486da3 1 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=10cf14aa0e6498bacc088531bd486da3 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.438 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.tWT 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.tWT 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.tWT 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5bda07c9c610191b68a58954df33b55ddc2af95eed6c3265 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.8NW 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5bda07c9c610191b68a58954df33b55ddc2af95eed6c3265 2 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5bda07c9c610191b68a58954df33b55ddc2af95eed6c3265 2 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5bda07c9c610191b68a58954df33b55ddc2af95eed6c3265 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.8NW 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.8NW 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.8NW 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=36a218bb9be98907ef3245b4ef2cfd6aba66b433bbd25254 00:14:10.439 
12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.A0I 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 36a218bb9be98907ef3245b4ef2cfd6aba66b433bbd25254 2 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 36a218bb9be98907ef3245b4ef2cfd6aba66b433bbd25254 2 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=36a218bb9be98907ef3245b4ef2cfd6aba66b433bbd25254 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.A0I 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.A0I 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.A0I 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=313550ebf7dfd80ceb4a0beb095fb9e5 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Cot 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 313550ebf7dfd80ceb4a0beb095fb9e5 1 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 313550ebf7dfd80ceb4a0beb095fb9e5 1 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=313550ebf7dfd80ceb4a0beb095fb9e5 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:10.439 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Cot 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Cot 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Cot 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9c4b48935225f4b59be899a00c1d38426d1d895e8c396d62557628d8c7a89d01 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.jEg 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9c4b48935225f4b59be899a00c1d38426d1d895e8c396d62557628d8c7a89d01 3 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9c4b48935225f4b59be899a00c1d38426d1d895e8c396d62557628d8c7a89d01 3 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9c4b48935225f4b59be899a00c1d38426d1d895e8c396d62557628d8c7a89d01 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.jEg 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.jEg 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.jEg 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3383264 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3383264 ']' 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.696 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3383390 /var/tmp/host.sock 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3383390 ']' 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:10.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:10.954 12:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Zqw 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Zqw 00:14:11.211 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Zqw 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.sra ]] 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sra 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sra 00:14:11.468 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sra 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tWT 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tWT 00:14:11.726 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tWT 00:14:11.982 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.8NW ]] 00:14:11.982 12:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8NW 00:14:11.982 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.982 12:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.982 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.982 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8NW 00:14:11.982 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8NW 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.A0I 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.A0I 00:14:12.240 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.A0I 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Cot ]] 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cot 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cot 00:14:12.498 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.Cot 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.jEg 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.jEg 00:14:12.755 12:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.jEg 00:14:13.013 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:13.013 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:13.013 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.013 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.013 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.013 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.271 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.529 00:14:13.529 12:53:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:13.529 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.529 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:13.787 { 00:14:13.787 "cntlid": 1, 00:14:13.787 "qid": 0, 00:14:13.787 "state": "enabled", 00:14:13.787 "thread": "nvmf_tgt_poll_group_000", 00:14:13.787 "listen_address": { 00:14:13.787 "trtype": "TCP", 00:14:13.787 "adrfam": "IPv4", 00:14:13.787 "traddr": "10.0.0.2", 00:14:13.787 "trsvcid": "4420" 00:14:13.787 }, 00:14:13.787 "peer_address": { 00:14:13.787 "trtype": "TCP", 00:14:13.787 "adrfam": "IPv4", 00:14:13.787 "traddr": "10.0.0.1", 00:14:13.787 "trsvcid": "52428" 00:14:13.787 }, 00:14:13.787 "auth": { 00:14:13.787 "state": "completed", 00:14:13.787 "digest": "sha256", 00:14:13.787 "dhgroup": "null" 00:14:13.787 } 00:14:13.787 } 00:14:13.787 ]' 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.787 12:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.046 12:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.979 12:53:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.979 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.237 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.802 00:14:15.802 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.802 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.803 12:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.803 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.060 { 00:14:16.060 "cntlid": 3, 00:14:16.060 "qid": 0, 00:14:16.060 
"state": "enabled", 00:14:16.060 "thread": "nvmf_tgt_poll_group_000", 00:14:16.060 "listen_address": { 00:14:16.060 "trtype": "TCP", 00:14:16.060 "adrfam": "IPv4", 00:14:16.060 "traddr": "10.0.0.2", 00:14:16.060 "trsvcid": "4420" 00:14:16.060 }, 00:14:16.060 "peer_address": { 00:14:16.060 "trtype": "TCP", 00:14:16.060 "adrfam": "IPv4", 00:14:16.060 "traddr": "10.0.0.1", 00:14:16.060 "trsvcid": "52452" 00:14:16.060 }, 00:14:16.060 "auth": { 00:14:16.060 "state": "completed", 00:14:16.060 "digest": "sha256", 00:14:16.060 "dhgroup": "null" 00:14:16.060 } 00:14:16.060 } 00:14:16.060 ]' 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.060 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.317 12:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.253 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:17.511 12:53:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.511 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.769 00:14:17.769 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.769 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.769 12:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.027 { 00:14:18.027 "cntlid": 5, 00:14:18.027 "qid": 0, 00:14:18.027 "state": "enabled", 00:14:18.027 "thread": "nvmf_tgt_poll_group_000", 00:14:18.027 "listen_address": { 00:14:18.027 "trtype": "TCP", 00:14:18.027 "adrfam": "IPv4", 00:14:18.027 "traddr": "10.0.0.2", 00:14:18.027 "trsvcid": "4420" 00:14:18.027 }, 00:14:18.027 "peer_address": { 00:14:18.027 "trtype": "TCP", 00:14:18.027 "adrfam": "IPv4", 00:14:18.027 "traddr": "10.0.0.1", 00:14:18.027 "trsvcid": "52478" 00:14:18.027 }, 00:14:18.027 "auth": { 00:14:18.027 "state": "completed", 00:14:18.027 "digest": "sha256", 00:14:18.027 "dhgroup": "null" 00:14:18.027 } 00:14:18.027 } 00:14:18.027 ]' 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.027 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:18.028 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:14:18.286 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.286 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.286 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.544 12:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:19.480 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:20.047 00:14:20.047 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.047 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.047 12:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.047 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.047 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.047 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.047 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.047 12:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.305 { 00:14:20.305 "cntlid": 7, 00:14:20.305 "qid": 0, 00:14:20.305 "state": "enabled", 00:14:20.305 "thread": "nvmf_tgt_poll_group_000", 00:14:20.305 "listen_address": { 00:14:20.305 "trtype": "TCP", 00:14:20.305 "adrfam": "IPv4", 00:14:20.305 "traddr": "10.0.0.2", 00:14:20.305 "trsvcid": "4420" 00:14:20.305 }, 00:14:20.305 "peer_address": { 00:14:20.305 "trtype": "TCP", 00:14:20.305 "adrfam": "IPv4", 00:14:20.305 "traddr": "10.0.0.1", 00:14:20.305 "trsvcid": "52502" 00:14:20.305 }, 00:14:20.305 "auth": { 00:14:20.305 "state": "completed", 00:14:20.305 "digest": "sha256", 00:14:20.305 "dhgroup": "null" 00:14:20.305 } 00:14:20.305 } 00:14:20.305 ]' 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.305 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.563 12:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.497 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.754 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:21.754 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:21.755 12:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.012 00:14:22.012 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.012 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.012 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.270 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.270 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.270 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:22.270 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.270 12:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.270 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.270 { 00:14:22.270 "cntlid": 9, 00:14:22.270 "qid": 0, 00:14:22.270 "state": "enabled", 00:14:22.270 "thread": "nvmf_tgt_poll_group_000", 00:14:22.270 "listen_address": { 00:14:22.270 "trtype": "TCP", 00:14:22.270 "adrfam": "IPv4", 00:14:22.270 "traddr": "10.0.0.2", 00:14:22.270 "trsvcid": "4420" 00:14:22.270 }, 00:14:22.271 "peer_address": { 00:14:22.271 "trtype": "TCP", 00:14:22.271 "adrfam": "IPv4", 00:14:22.271 "traddr": "10.0.0.1", 00:14:22.271 "trsvcid": "52528" 00:14:22.271 }, 00:14:22.271 "auth": { 00:14:22.271 "state": "completed", 00:14:22.271 "digest": "sha256", 00:14:22.271 "dhgroup": "ffdhe2048" 00:14:22.271 } 00:14:22.271 } 00:14:22.271 ]' 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.271 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.530 12:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.466 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.467 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:23.725 12:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.291 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.291 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.291 { 00:14:24.291 "cntlid": 11, 00:14:24.291 "qid": 0, 00:14:24.291 "state": "enabled", 00:14:24.291 "thread": "nvmf_tgt_poll_group_000", 00:14:24.291 "listen_address": { 00:14:24.291 "trtype": "TCP", 00:14:24.291 "adrfam": "IPv4", 00:14:24.291 "traddr": "10.0.0.2", 00:14:24.291 "trsvcid": "4420" 00:14:24.291 }, 00:14:24.291 "peer_address": { 00:14:24.291 "trtype": "TCP", 00:14:24.291 "adrfam": "IPv4", 00:14:24.291 "traddr": "10.0.0.1", 00:14:24.291 "trsvcid": "44060" 00:14:24.291 }, 00:14:24.291 "auth": { 00:14:24.291 "state": "completed", 00:14:24.291 "digest": "sha256", 00:14:24.291 "dhgroup": "ffdhe2048" 00:14:24.291 } 00:14:24.291 } 00:14:24.291 ]' 00:14:24.291 
12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.549 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.837 12:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.793 12:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.051 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.308 00:14:26.308 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.308 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.308 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.566 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.566 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.566 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.566 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.566 12:53:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.566 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.566 { 00:14:26.566 "cntlid": 13, 00:14:26.566 "qid": 0, 00:14:26.566 "state": "enabled", 00:14:26.566 "thread": "nvmf_tgt_poll_group_000", 00:14:26.566 "listen_address": { 00:14:26.566 "trtype": "TCP", 00:14:26.566 "adrfam": "IPv4", 00:14:26.566 "traddr": "10.0.0.2", 00:14:26.566 "trsvcid": "4420" 00:14:26.566 }, 00:14:26.566 "peer_address": { 00:14:26.566 "trtype": "TCP", 00:14:26.566 "adrfam": "IPv4", 00:14:26.566 "traddr": "10.0.0.1", 00:14:26.566 "trsvcid": "44082" 00:14:26.566 }, 00:14:26.566 "auth": { 00:14:26.567 "state": "completed", 00:14:26.567 "digest": "sha256", 00:14:26.567 "dhgroup": "ffdhe2048" 00:14:26.567 } 00:14:26.567 } 00:14:26.567 ]' 00:14:26.567 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.567 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.567 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.567 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.567 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.825 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.825 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.825 12:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.083 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:14:28.015 12:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.015 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.272 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:28.272 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.273 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:28.528 00:14:28.528 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.528 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.528 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.785 { 00:14:28.785 "cntlid": 15, 00:14:28.785 "qid": 0, 00:14:28.785 "state": "enabled", 00:14:28.785 "thread": "nvmf_tgt_poll_group_000", 00:14:28.785 "listen_address": { 00:14:28.785 "trtype": "TCP", 00:14:28.785 "adrfam": "IPv4", 00:14:28.785 "traddr": "10.0.0.2", 00:14:28.785 "trsvcid": "4420" 00:14:28.785 }, 00:14:28.785 "peer_address": { 00:14:28.785 "trtype": "TCP", 00:14:28.785 "adrfam": "IPv4", 00:14:28.785 "traddr": "10.0.0.1", 00:14:28.785 "trsvcid": "44094" 00:14:28.785 }, 00:14:28.785 "auth": { 00:14:28.785 "state": "completed", 00:14:28.785 "digest": "sha256", 00:14:28.785 "dhgroup": "ffdhe2048" 00:14:28.785 } 00:14:28.785 } 00:14:28.785 ]' 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.785 12:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.042 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.042 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.042 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.299 12:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.231 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.488 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.745 00:14:30.745 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.745 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.745 12:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.002 { 00:14:31.002 "cntlid": 17, 00:14:31.002 "qid": 0, 00:14:31.002 "state": "enabled", 00:14:31.002 "thread": "nvmf_tgt_poll_group_000", 00:14:31.002 "listen_address": { 00:14:31.002 "trtype": "TCP", 00:14:31.002 "adrfam": "IPv4", 00:14:31.002 "traddr": 
"10.0.0.2", 00:14:31.002 "trsvcid": "4420" 00:14:31.002 }, 00:14:31.002 "peer_address": { 00:14:31.002 "trtype": "TCP", 00:14:31.002 "adrfam": "IPv4", 00:14:31.002 "traddr": "10.0.0.1", 00:14:31.002 "trsvcid": "44124" 00:14:31.002 }, 00:14:31.002 "auth": { 00:14:31.002 "state": "completed", 00:14:31.002 "digest": "sha256", 00:14:31.002 "dhgroup": "ffdhe3072" 00:14:31.002 } 00:14:31.002 } 00:14:31.002 ]' 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.002 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.259 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.259 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.259 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.259 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.259 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.516 12:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:14:32.451 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.452 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.709 12:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.968 00:14:32.968 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.968 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.968 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:33.226 { 00:14:33.226 "cntlid": 19, 00:14:33.226 "qid": 0, 00:14:33.226 "state": "enabled", 00:14:33.226 "thread": "nvmf_tgt_poll_group_000", 00:14:33.226 "listen_address": { 00:14:33.226 "trtype": "TCP", 00:14:33.226 "adrfam": "IPv4", 00:14:33.226 "traddr": "10.0.0.2", 00:14:33.226 "trsvcid": "4420" 00:14:33.226 }, 00:14:33.226 "peer_address": { 00:14:33.226 "trtype": "TCP", 00:14:33.226 "adrfam": "IPv4", 00:14:33.226 "traddr": "10.0.0.1", 00:14:33.226 "trsvcid": "36874" 00:14:33.226 }, 00:14:33.226 "auth": { 00:14:33.226 "state": "completed", 00:14:33.226 "digest": "sha256", 00:14:33.226 "dhgroup": "ffdhe3072" 00:14:33.226 } 00:14:33.226 } 00:14:33.226 ]' 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.226 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.483 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.483 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.483 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.741 12:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.676 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.934 12:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.191 00:14:35.191 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.191 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.191 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.447 { 00:14:35.447 "cntlid": 21, 00:14:35.447 "qid": 0, 00:14:35.447 "state": "enabled", 00:14:35.447 "thread": "nvmf_tgt_poll_group_000", 00:14:35.447 "listen_address": { 00:14:35.447 "trtype": "TCP", 00:14:35.447 "adrfam": "IPv4", 00:14:35.447 "traddr": "10.0.0.2", 00:14:35.447 "trsvcid": "4420" 00:14:35.447 }, 00:14:35.447 "peer_address": { 00:14:35.447 "trtype": "TCP", 00:14:35.447 "adrfam": "IPv4", 00:14:35.447 "traddr": "10.0.0.1", 00:14:35.447 "trsvcid": "36898" 00:14:35.447 }, 00:14:35.447 "auth": { 00:14:35.447 "state": "completed", 00:14:35.447 "digest": "sha256", 00:14:35.447 "dhgroup": "ffdhe3072" 00:14:35.447 } 00:14:35.447 } 00:14:35.447 ]' 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.447 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.705 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.705 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.705 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.963 12:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.900 12:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.900 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.159 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.159 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.159 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.417 00:14:37.417 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.417 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.417 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.675 { 00:14:37.675 "cntlid": 23, 00:14:37.675 "qid": 0, 00:14:37.675 "state": "enabled", 00:14:37.675 "thread": "nvmf_tgt_poll_group_000", 00:14:37.675 "listen_address": { 00:14:37.675 "trtype": "TCP", 00:14:37.675 "adrfam": "IPv4", 00:14:37.675 "traddr": "10.0.0.2", 00:14:37.675 "trsvcid": "4420" 00:14:37.675 }, 00:14:37.675 "peer_address": { 00:14:37.675 "trtype": "TCP", 00:14:37.675 "adrfam": "IPv4", 00:14:37.675 "traddr": "10.0.0.1", 00:14:37.675 "trsvcid": "36938" 00:14:37.675 }, 00:14:37.675 "auth": { 00:14:37.675 "state": "completed", 00:14:37.675 "digest": "sha256", 00:14:37.675 "dhgroup": "ffdhe3072" 00:14:37.675 } 00:14:37.675 } 00:14:37.675 ]' 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.675 12:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.933 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:14:38.869 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.869 12:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:38.869 12:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.869 12:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.869 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.869 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.869 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.869 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.869 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.127 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.698 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.698 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.956 12:53:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.956 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.956 { 00:14:39.956 "cntlid": 25, 00:14:39.956 "qid": 0, 00:14:39.956 "state": "enabled", 00:14:39.956 "thread": "nvmf_tgt_poll_group_000", 00:14:39.956 "listen_address": { 00:14:39.956 "trtype": "TCP", 00:14:39.956 "adrfam": "IPv4", 00:14:39.956 "traddr": "10.0.0.2", 00:14:39.956 "trsvcid": "4420" 00:14:39.956 }, 00:14:39.956 "peer_address": { 00:14:39.956 "trtype": "TCP", 00:14:39.956 "adrfam": "IPv4", 00:14:39.956 "traddr": "10.0.0.1", 00:14:39.956 "trsvcid": "36958" 00:14:39.956 }, 00:14:39.956 "auth": { 00:14:39.956 "state": "completed", 00:14:39.956 "digest": "sha256", 00:14:39.956 "dhgroup": "ffdhe4096" 00:14:39.956 } 00:14:39.956 } 00:14:39.956 ]' 00:14:39.956 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.956 12:53:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.956 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.956 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.956 12:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.956 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.956 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.956 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.214 12:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.150 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.408 12:53:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.408 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.976 00:14:41.976 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.976 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.976 12:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.976 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.976 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.976 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.976 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.234 { 00:14:42.234 "cntlid": 27, 00:14:42.234 "qid": 0, 00:14:42.234 "state": "enabled", 00:14:42.234 "thread": "nvmf_tgt_poll_group_000", 00:14:42.234 "listen_address": { 00:14:42.234 "trtype": "TCP", 00:14:42.234 "adrfam": "IPv4", 00:14:42.234 "traddr": "10.0.0.2", 00:14:42.234 "trsvcid": "4420" 00:14:42.234 }, 00:14:42.234 "peer_address": { 00:14:42.234 "trtype": "TCP", 00:14:42.234 "adrfam": "IPv4", 00:14:42.234 "traddr": "10.0.0.1", 00:14:42.234 "trsvcid": "36982" 00:14:42.234 }, 00:14:42.234 "auth": { 00:14:42.234 "state": "completed", 00:14:42.234 "digest": "sha256", 00:14:42.234 "dhgroup": "ffdhe4096" 00:14:42.234 } 00:14:42.234 } 00:14:42.234 ]' 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.234 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.492 12:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.427 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.684 12:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.942 00:14:43.942 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.942 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.942 12:54:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.200 { 00:14:44.200 "cntlid": 29, 00:14:44.200 "qid": 0, 00:14:44.200 "state": "enabled", 00:14:44.200 "thread": "nvmf_tgt_poll_group_000", 00:14:44.200 "listen_address": { 00:14:44.200 "trtype": "TCP", 00:14:44.200 "adrfam": "IPv4", 00:14:44.200 "traddr": "10.0.0.2", 00:14:44.200 "trsvcid": "4420" 00:14:44.200 }, 00:14:44.200 "peer_address": { 00:14:44.200 "trtype": "TCP", 00:14:44.200 "adrfam": "IPv4", 00:14:44.200 "traddr": "10.0.0.1", 00:14:44.200 "trsvcid": "59924" 00:14:44.200 }, 00:14:44.200 "auth": { 00:14:44.200 "state": "completed", 00:14:44.200 "digest": "sha256", 00:14:44.200 "dhgroup": "ffdhe4096" 00:14:44.200 } 00:14:44.200 } 00:14:44.200 ]' 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.200 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.458 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.458 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.458 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.458 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.458 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.715 12:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.647 12:54:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.647 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.906 12:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.276 00:14:46.276 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.276 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.276 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.562 { 00:14:46.562 "cntlid": 31, 00:14:46.562 "qid": 0, 00:14:46.562 "state": "enabled", 00:14:46.562 "thread": "nvmf_tgt_poll_group_000", 00:14:46.562 "listen_address": { 00:14:46.562 "trtype": "TCP", 00:14:46.562 "adrfam": "IPv4", 00:14:46.562 "traddr": "10.0.0.2", 00:14:46.562 "trsvcid": "4420" 00:14:46.562 }, 
00:14:46.562 "peer_address": { 00:14:46.562 "trtype": "TCP", 00:14:46.562 "adrfam": "IPv4", 00:14:46.562 "traddr": "10.0.0.1", 00:14:46.562 "trsvcid": "59948" 00:14:46.562 }, 00:14:46.562 "auth": { 00:14:46.562 "state": "completed", 00:14:46.562 "digest": "sha256", 00:14:46.562 "dhgroup": "ffdhe4096" 00:14:46.562 } 00:14:46.562 } 00:14:46.562 ]' 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.562 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.819 12:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:47.754 12:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.012 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.580 00:14:48.580 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:48.580 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:48.580 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.838 { 00:14:48.838 "cntlid": 33, 00:14:48.838 "qid": 0, 00:14:48.838 "state": "enabled", 00:14:48.838 "thread": "nvmf_tgt_poll_group_000", 00:14:48.838 "listen_address": { 00:14:48.838 "trtype": "TCP", 00:14:48.838 "adrfam": "IPv4", 00:14:48.838 "traddr": "10.0.0.2", 00:14:48.838 "trsvcid": "4420" 00:14:48.838 }, 00:14:48.838 "peer_address": { 00:14:48.838 "trtype": "TCP", 00:14:48.838 "adrfam": "IPv4", 00:14:48.838 "traddr": "10.0.0.1", 00:14:48.838 "trsvcid": "59970" 00:14:48.838 }, 00:14:48.838 "auth": { 00:14:48.838 "state": "completed", 00:14:48.838 "digest": "sha256", 00:14:48.838 "dhgroup": "ffdhe6144" 00:14:48.838 } 00:14:48.838 } 00:14:48.838 ]' 00:14:48.838 12:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.838 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.838 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.838 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.096 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.096 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.096 12:54:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.096 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.355 12:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.290 12:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.856 00:14:50.856 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.856 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.856 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.114 { 00:14:51.114 "cntlid": 35, 00:14:51.114 "qid": 0, 00:14:51.114 "state": "enabled", 00:14:51.114 "thread": "nvmf_tgt_poll_group_000", 00:14:51.114 "listen_address": { 00:14:51.114 "trtype": "TCP", 00:14:51.114 "adrfam": "IPv4", 00:14:51.114 "traddr": "10.0.0.2", 00:14:51.114 "trsvcid": "4420" 00:14:51.114 }, 00:14:51.114 "peer_address": { 00:14:51.114 "trtype": "TCP", 00:14:51.114 "adrfam": "IPv4", 00:14:51.114 "traddr": "10.0.0.1", 00:14:51.114 "trsvcid": "60010" 00:14:51.114 }, 00:14:51.114 "auth": { 00:14:51.114 "state": "completed", 00:14:51.114 "digest": "sha256", 00:14:51.114 "dhgroup": "ffdhe6144" 00:14:51.114 } 00:14:51.114 } 00:14:51.114 ]' 00:14:51.114 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.373 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.633 12:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.571 12:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.136 00:14:53.136 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.136 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.136 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.394 { 00:14:53.394 "cntlid": 37, 00:14:53.394 "qid": 0, 00:14:53.394 "state": "enabled", 00:14:53.394 "thread": "nvmf_tgt_poll_group_000", 00:14:53.394 "listen_address": { 00:14:53.394 "trtype": "TCP", 00:14:53.394 "adrfam": "IPv4", 00:14:53.394 "traddr": "10.0.0.2", 00:14:53.394 "trsvcid": "4420" 00:14:53.394 }, 00:14:53.394 "peer_address": { 00:14:53.394 "trtype": "TCP", 00:14:53.394 "adrfam": "IPv4", 00:14:53.394 "traddr": "10.0.0.1", 00:14:53.394 "trsvcid": "59300" 00:14:53.394 }, 00:14:53.394 "auth": { 00:14:53.394 "state": "completed", 00:14:53.394 "digest": "sha256", 00:14:53.394 "dhgroup": "ffdhe6144" 00:14:53.394 } 00:14:53.394 } 00:14:53.394 ]' 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.394 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.652 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:53.652 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.652 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.652 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.652 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.909 12:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.844 12:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.102 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.667 00:14:55.667 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.667 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.667 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.924 { 00:14:55.924 "cntlid": 39, 00:14:55.924 "qid": 0, 00:14:55.924 "state": "enabled", 00:14:55.924 "thread": "nvmf_tgt_poll_group_000", 00:14:55.924 "listen_address": { 00:14:55.924 "trtype": "TCP", 00:14:55.924 "adrfam": "IPv4", 00:14:55.924 "traddr": "10.0.0.2", 00:14:55.924 "trsvcid": "4420" 00:14:55.924 }, 00:14:55.924 "peer_address": { 00:14:55.924 "trtype": "TCP", 00:14:55.924 "adrfam": "IPv4", 00:14:55.924 "traddr": "10.0.0.1", 00:14:55.924 "trsvcid": "59328" 00:14:55.924 }, 00:14:55.924 "auth": { 00:14:55.924 "state": "completed", 00:14:55.924 "digest": "sha256", 00:14:55.924 "dhgroup": "ffdhe6144" 00:14:55.924 } 00:14:55.924 } 00:14:55.924 ]' 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.924 12:54:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.924 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:55.925 12:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.925 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.925 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.925 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.182 12:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:57.120 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.378 12:54:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.378 12:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.314 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.314 { 00:14:58.314 "cntlid": 41, 00:14:58.314 "qid": 0, 00:14:58.314 "state": "enabled", 00:14:58.314 "thread": "nvmf_tgt_poll_group_000", 00:14:58.314 "listen_address": { 00:14:58.314 "trtype": "TCP", 00:14:58.314 "adrfam": "IPv4", 00:14:58.314 "traddr": "10.0.0.2", 00:14:58.314 "trsvcid": "4420" 00:14:58.314 }, 00:14:58.314 "peer_address": { 00:14:58.314 "trtype": "TCP", 00:14:58.314 "adrfam": "IPv4", 00:14:58.314 "traddr": "10.0.0.1", 00:14:58.314 "trsvcid": "59356" 00:14:58.314 }, 00:14:58.314 "auth": { 00:14:58.314 "state": "completed", 00:14:58.314 "digest": "sha256", 00:14:58.314 "dhgroup": "ffdhe8192" 00:14:58.314 } 00:14:58.314 } 00:14:58.314 ]' 00:14:58.314 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.572 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.830 12:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:14:59.766 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.766 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:59.766 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.766 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.766 12:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.766 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.767 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:59.767 12:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.025 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.962 00:15:00.962 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.962 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.962 12:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.962 { 00:15:00.962 "cntlid": 43, 00:15:00.962 "qid": 0, 00:15:00.962 "state": "enabled", 00:15:00.962 "thread": "nvmf_tgt_poll_group_000", 00:15:00.962 "listen_address": { 00:15:00.962 "trtype": "TCP", 00:15:00.962 "adrfam": "IPv4", 00:15:00.962 "traddr": "10.0.0.2", 00:15:00.962 "trsvcid": "4420" 00:15:00.962 }, 00:15:00.962 "peer_address": { 00:15:00.962 "trtype": "TCP", 00:15:00.962 "adrfam": "IPv4", 00:15:00.962 "traddr": "10.0.0.1", 00:15:00.962 "trsvcid": "59380" 00:15:00.962 }, 00:15:00.962 "auth": { 00:15:00.962 "state": "completed", 00:15:00.962 "digest": "sha256", 00:15:00.962 "dhgroup": "ffdhe8192" 00:15:00.962 } 00:15:00.962 } 00:15:00.962 ]' 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.962 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.220 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.220 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.220 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.220 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.220 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.477 12:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.414 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.672 12:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.614 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.614 { 00:15:03.614 "cntlid": 45, 00:15:03.614 "qid": 0, 00:15:03.614 "state": "enabled", 00:15:03.614 "thread": "nvmf_tgt_poll_group_000", 00:15:03.614 "listen_address": { 00:15:03.614 "trtype": "TCP", 00:15:03.614 "adrfam": "IPv4", 00:15:03.614 "traddr": "10.0.0.2", 00:15:03.614 "trsvcid": "4420" 
00:15:03.614 }, 00:15:03.614 "peer_address": { 00:15:03.614 "trtype": "TCP", 00:15:03.614 "adrfam": "IPv4", 00:15:03.614 "traddr": "10.0.0.1", 00:15:03.614 "trsvcid": "43842" 00:15:03.614 }, 00:15:03.614 "auth": { 00:15:03.614 "state": "completed", 00:15:03.614 "digest": "sha256", 00:15:03.614 "dhgroup": "ffdhe8192" 00:15:03.614 } 00:15:03.614 } 00:15:03.614 ]' 00:15:03.614 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.871 12:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.129 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:05.062 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.062 12:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:05.062 12:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.062 12:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:05.062 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.063 12:54:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:05.063 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.063 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.063 12:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.063 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.063 12:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.996 00:15:05.997 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.997 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.997 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.253 { 00:15:06.253 "cntlid": 47, 00:15:06.253 "qid": 0, 00:15:06.253 "state": "enabled", 00:15:06.253 "thread": "nvmf_tgt_poll_group_000", 00:15:06.253 "listen_address": { 00:15:06.253 "trtype": "TCP", 00:15:06.253 "adrfam": "IPv4", 00:15:06.253 "traddr": "10.0.0.2", 00:15:06.253 "trsvcid": "4420" 00:15:06.253 }, 00:15:06.253 "peer_address": { 00:15:06.253 "trtype": "TCP", 00:15:06.253 "adrfam": "IPv4", 00:15:06.253 "traddr": "10.0.0.1", 00:15:06.253 "trsvcid": "43862" 00:15:06.253 }, 00:15:06.253 "auth": { 00:15:06.253 "state": "completed", 00:15:06.253 "digest": "sha256", 00:15:06.253 "dhgroup": "ffdhe8192" 00:15:06.253 } 00:15:06.253 } 00:15:06.253 ]' 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.253 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.253 
12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.509 12:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:07.441 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.442 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.442 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.442 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.698 12:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.955 00:15:07.955 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.955 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.955 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.211 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.211 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.211 12:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.211 12:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.468 { 00:15:08.468 "cntlid": 49, 00:15:08.468 "qid": 0, 00:15:08.468 "state": "enabled", 00:15:08.468 "thread": "nvmf_tgt_poll_group_000", 00:15:08.468 "listen_address": { 00:15:08.468 "trtype": "TCP", 00:15:08.468 "adrfam": "IPv4", 00:15:08.468 "traddr": "10.0.0.2", 00:15:08.468 "trsvcid": "4420" 00:15:08.468 }, 00:15:08.468 "peer_address": { 00:15:08.468 "trtype": "TCP", 00:15:08.468 "adrfam": "IPv4", 00:15:08.468 "traddr": "10.0.0.1", 00:15:08.468 "trsvcid": "43878" 00:15:08.468 }, 00:15:08.468 "auth": { 00:15:08.468 "state": "completed", 00:15:08.468 "digest": "sha384", 00:15:08.468 "dhgroup": "null" 00:15:08.468 } 00:15:08.468 } 00:15:08.468 ]' 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.468 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.725 12:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.659 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.938 12:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.197 00:15:10.197 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.197 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.197 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.455 { 00:15:10.455 "cntlid": 51, 00:15:10.455 "qid": 0, 00:15:10.455 "state": "enabled", 00:15:10.455 "thread": "nvmf_tgt_poll_group_000", 00:15:10.455 "listen_address": { 00:15:10.455 "trtype": "TCP", 00:15:10.455 "adrfam": "IPv4", 00:15:10.455 "traddr": "10.0.0.2", 00:15:10.455 "trsvcid": "4420" 00:15:10.455 }, 00:15:10.455 "peer_address": { 00:15:10.455 "trtype": "TCP", 00:15:10.455 "adrfam": "IPv4", 00:15:10.455 "traddr": "10.0.0.1", 00:15:10.455 "trsvcid": "43902" 00:15:10.455 }, 00:15:10.455 "auth": { 00:15:10.455 "state": "completed", 00:15:10.455 "digest": "sha384", 00:15:10.455 "dhgroup": "null" 00:15:10.455 } 00:15:10.455 } 00:15:10.455 ]' 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.455 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.713 12:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.648 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:11.906 12:54:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.906 12:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.906 12:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.906 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.906 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.163 00:15:12.163 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.163 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.163 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.730 { 00:15:12.730 "cntlid": 53, 00:15:12.730 "qid": 0, 00:15:12.730 "state": "enabled", 00:15:12.730 "thread": "nvmf_tgt_poll_group_000", 00:15:12.730 "listen_address": { 00:15:12.730 "trtype": "TCP", 00:15:12.730 "adrfam": "IPv4", 00:15:12.730 "traddr": "10.0.0.2", 00:15:12.730 "trsvcid": "4420" 00:15:12.730 }, 00:15:12.730 "peer_address": { 00:15:12.730 "trtype": "TCP", 00:15:12.730 "adrfam": "IPv4", 00:15:12.730 "traddr": "10.0.0.1", 00:15:12.730 "trsvcid": "43918" 00:15:12.730 }, 00:15:12.730 "auth": { 00:15:12.730 "state": "completed", 00:15:12.730 "digest": "sha384", 00:15:12.730 "dhgroup": "null" 00:15:12.730 } 00:15:12.730 } 00:15:12.730 ]' 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.730 12:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.989 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:13.924 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.925 12:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:14.182 12:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.183 12:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.183 12:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.183 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.183 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:14.440 00:15:14.440 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.440 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.440 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.699 { 00:15:14.699 "cntlid": 55, 00:15:14.699 "qid": 0, 00:15:14.699 "state": "enabled", 00:15:14.699 "thread": "nvmf_tgt_poll_group_000", 00:15:14.699 "listen_address": { 00:15:14.699 "trtype": "TCP", 00:15:14.699 "adrfam": "IPv4", 00:15:14.699 "traddr": "10.0.0.2", 00:15:14.699 "trsvcid": "4420" 00:15:14.699 }, 00:15:14.699 "peer_address": { 00:15:14.699 "trtype": "TCP", 00:15:14.699 "adrfam": "IPv4", 00:15:14.699 "traddr": "10.0.0.1", 00:15:14.699 "trsvcid": "35200" 00:15:14.699 }, 00:15:14.699 "auth": { 00:15:14.699 "state": "completed", 00:15:14.699 "digest": "sha384", 00:15:14.699 "dhgroup": "null" 00:15:14.699 } 00:15:14.699 } 00:15:14.699 ]' 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.699 12:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.959 12:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:15:15.915 12:54:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.915 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.173 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:16.432 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.692 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.950 { 00:15:16.950 "cntlid": 57, 00:15:16.950 "qid": 0, 00:15:16.950 "state": "enabled", 00:15:16.950 "thread": "nvmf_tgt_poll_group_000", 00:15:16.950 "listen_address": { 00:15:16.950 "trtype": "TCP", 00:15:16.950 "adrfam": "IPv4", 00:15:16.950 "traddr": "10.0.0.2", 00:15:16.950 "trsvcid": "4420" 00:15:16.950 }, 00:15:16.950 "peer_address": { 00:15:16.950 "trtype": "TCP", 00:15:16.950 "adrfam": "IPv4", 00:15:16.950 "traddr": "10.0.0.1", 00:15:16.950 "trsvcid": "35220" 00:15:16.950 }, 00:15:16.950 "auth": { 00:15:16.950 "state": "completed", 00:15:16.950 "digest": "sha384", 00:15:16.950 "dhgroup": "ffdhe2048" 00:15:16.950 } 00:15:16.950 } 00:15:16.950 ]' 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.950 12:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.950 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.950 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.950 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.208 12:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.147 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.405 12:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.406 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.406 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.664 00:15:18.664 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.664 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.664 12:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.921 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.921 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.921 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.921 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.921 12:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.922 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.922 { 00:15:18.922 "cntlid": 59, 00:15:18.922 "qid": 0, 00:15:18.922 "state": "enabled", 00:15:18.922 "thread": "nvmf_tgt_poll_group_000", 00:15:18.922 "listen_address": { 00:15:18.922 "trtype": "TCP", 00:15:18.922 "adrfam": "IPv4", 00:15:18.922 "traddr": "10.0.0.2", 00:15:18.922 "trsvcid": "4420" 00:15:18.922 }, 00:15:18.922 "peer_address": { 00:15:18.922 "trtype": "TCP", 00:15:18.922 "adrfam": "IPv4", 00:15:18.922 
"traddr": "10.0.0.1", 00:15:18.922 "trsvcid": "35244" 00:15:18.922 }, 00:15:18.922 "auth": { 00:15:18.922 "state": "completed", 00:15:18.922 "digest": "sha384", 00:15:18.922 "dhgroup": "ffdhe2048" 00:15:18.922 } 00:15:18.922 } 00:15:18.922 ]' 00:15:18.922 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.922 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.922 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.922 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.922 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.181 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.181 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.181 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.440 12:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.380 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.639 12:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.639 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.639 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.897 00:15:20.897 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.897 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.897 12:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.155 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.155 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.155 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.155 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.155 12:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.155 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.155 { 00:15:21.155 "cntlid": 61, 00:15:21.155 "qid": 0, 00:15:21.155 "state": "enabled", 00:15:21.155 "thread": "nvmf_tgt_poll_group_000", 00:15:21.155 "listen_address": { 00:15:21.155 "trtype": "TCP", 00:15:21.155 "adrfam": "IPv4", 00:15:21.155 "traddr": "10.0.0.2", 00:15:21.155 "trsvcid": "4420" 00:15:21.155 }, 00:15:21.155 "peer_address": { 00:15:21.155 "trtype": "TCP", 00:15:21.155 "adrfam": "IPv4", 00:15:21.155 "traddr": "10.0.0.1", 00:15:21.155 "trsvcid": "35264" 00:15:21.155 }, 00:15:21.155 "auth": { 00:15:21.155 "state": "completed", 00:15:21.155 "digest": "sha384", 00:15:21.155 "dhgroup": "ffdhe2048" 00:15:21.155 } 00:15:21.155 } 00:15:21.155 ]' 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.156 12:54:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.724 12:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.661 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:22.920 12:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:23.179 00:15:23.179 12:54:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.179 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.179 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.438 { 00:15:23.438 "cntlid": 63, 00:15:23.438 "qid": 0, 00:15:23.438 "state": "enabled", 00:15:23.438 "thread": "nvmf_tgt_poll_group_000", 00:15:23.438 "listen_address": { 00:15:23.438 "trtype": "TCP", 00:15:23.438 "adrfam": "IPv4", 00:15:23.438 "traddr": "10.0.0.2", 00:15:23.438 "trsvcid": "4420" 00:15:23.438 }, 00:15:23.438 "peer_address": { 00:15:23.438 "trtype": "TCP", 00:15:23.438 "adrfam": "IPv4", 00:15:23.438 "traddr": "10.0.0.1", 00:15:23.438 "trsvcid": "44210" 00:15:23.438 }, 00:15:23.438 "auth": { 00:15:23.438 "state": "completed", 00:15:23.438 "digest": "sha384", 00:15:23.438 "dhgroup": "ffdhe2048" 00:15:23.438 } 00:15:23.438 } 00:15:23.438 ]' 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.438 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.697 12:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
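Up to this point the trace has walked sha384 through the null dhgroup and then ffdhe2048 across the configured keys; the entries that follow repeat the same cycle for ffdhe3072 and then ffdhe4096. Every iteration is the same connect_authenticate pattern visible in the xtrace: restrict the host-side bdev_nvme options to one digest/dhgroup pair, add the host NQN to the subsystem with the DH-HMAC-CHAP key under test, attach a controller through the host RPC socket, confirm the negotiated digest, dhgroup and auth state from nvmf_subsystem_get_qpairs with jq, then detach, reconnect once more via nvme-cli with the raw DHHC-1 secrets, and remove the host again. A minimal sketch of one such iteration, assuming the rpc.py path and sockets shown in the trace (hostrpc is written out here as the host-socket rpc.py call it expands to, rpc_cmd stands in for the target-side RPC call, and key1/ckey1 are illustrative names for keys registered earlier in the test):

hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
# host side: permit only the digest/dhgroup combination under test
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# target side: allow the host NQN to authenticate with this key pair
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host side: attach a controller, authenticating with the same key pair
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# inspect the qpair the target sees and check what was negotiated
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect sha384
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect ffdhe2048
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect completed
# tear down before the next key/dhgroup combination
hostrpc bdev_nvme_detach_controller nvme0

The nvme-cli leg seen in the trace performs the same authentication from the kernel initiator -- nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 with --dhchap-secret and --dhchap-ctrl-secret set to the DHHC-1 strings for that key -- followed by nvme disconnect and nvmf_subsystem_remove_host before the loop advances.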
00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.634 12:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.892 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.460 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.460 { 
00:15:25.460 "cntlid": 65, 00:15:25.460 "qid": 0, 00:15:25.460 "state": "enabled", 00:15:25.460 "thread": "nvmf_tgt_poll_group_000", 00:15:25.460 "listen_address": { 00:15:25.460 "trtype": "TCP", 00:15:25.460 "adrfam": "IPv4", 00:15:25.460 "traddr": "10.0.0.2", 00:15:25.460 "trsvcid": "4420" 00:15:25.460 }, 00:15:25.460 "peer_address": { 00:15:25.460 "trtype": "TCP", 00:15:25.460 "adrfam": "IPv4", 00:15:25.460 "traddr": "10.0.0.1", 00:15:25.460 "trsvcid": "44240" 00:15:25.460 }, 00:15:25.460 "auth": { 00:15:25.460 "state": "completed", 00:15:25.460 "digest": "sha384", 00:15:25.460 "dhgroup": "ffdhe3072" 00:15:25.460 } 00:15:25.460 } 00:15:25.460 ]' 00:15:25.460 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.717 12:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.973 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.908 12:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.166 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.733 00:15:27.733 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.733 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.733 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.991 { 00:15:27.991 "cntlid": 67, 00:15:27.991 "qid": 0, 00:15:27.991 "state": "enabled", 00:15:27.991 "thread": "nvmf_tgt_poll_group_000", 00:15:27.991 "listen_address": { 00:15:27.991 "trtype": "TCP", 00:15:27.991 "adrfam": "IPv4", 00:15:27.991 "traddr": "10.0.0.2", 00:15:27.991 "trsvcid": "4420" 00:15:27.991 }, 00:15:27.991 "peer_address": { 00:15:27.991 "trtype": "TCP", 00:15:27.991 "adrfam": "IPv4", 00:15:27.991 "traddr": "10.0.0.1", 00:15:27.991 "trsvcid": "44264" 00:15:27.991 }, 00:15:27.991 "auth": { 00:15:27.991 "state": "completed", 00:15:27.991 "digest": "sha384", 00:15:27.991 "dhgroup": "ffdhe3072" 00:15:27.991 } 00:15:27.991 } 00:15:27.991 ]' 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.991 12:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.991 12:54:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.991 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.991 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.991 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.991 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.249 12:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.187 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.444 12:54:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.445 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.445 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.702 00:15:29.702 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.702 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.702 12:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.959 { 00:15:29.959 "cntlid": 69, 00:15:29.959 "qid": 0, 00:15:29.959 "state": "enabled", 00:15:29.959 "thread": "nvmf_tgt_poll_group_000", 00:15:29.959 "listen_address": { 00:15:29.959 "trtype": "TCP", 00:15:29.959 "adrfam": "IPv4", 00:15:29.959 "traddr": "10.0.0.2", 00:15:29.959 "trsvcid": "4420" 00:15:29.959 }, 00:15:29.959 "peer_address": { 00:15:29.959 "trtype": "TCP", 00:15:29.959 "adrfam": "IPv4", 00:15:29.959 "traddr": "10.0.0.1", 00:15:29.959 "trsvcid": "44286" 00:15:29.959 }, 00:15:29.959 "auth": { 00:15:29.959 "state": "completed", 00:15:29.959 "digest": "sha384", 00:15:29.959 "dhgroup": "ffdhe3072" 00:15:29.959 } 00:15:29.959 } 00:15:29.959 ]' 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.959 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.216 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.216 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.216 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.536 12:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret 
DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.470 12:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:32.035 00:15:32.035 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.035 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.035 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.293 { 00:15:32.293 "cntlid": 71, 00:15:32.293 "qid": 0, 00:15:32.293 "state": "enabled", 00:15:32.293 "thread": "nvmf_tgt_poll_group_000", 00:15:32.293 "listen_address": { 00:15:32.293 "trtype": "TCP", 00:15:32.293 "adrfam": "IPv4", 00:15:32.293 "traddr": "10.0.0.2", 00:15:32.293 "trsvcid": "4420" 00:15:32.293 }, 00:15:32.293 "peer_address": { 00:15:32.293 "trtype": "TCP", 00:15:32.293 "adrfam": "IPv4", 00:15:32.293 "traddr": "10.0.0.1", 00:15:32.293 "trsvcid": "44316" 00:15:32.293 }, 00:15:32.293 "auth": { 00:15:32.293 "state": "completed", 00:15:32.293 "digest": "sha384", 00:15:32.293 "dhgroup": "ffdhe3072" 00:15:32.293 } 00:15:32.293 } 00:15:32.293 ]' 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.293 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.578 12:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.518 12:54:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.776 12:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:34.343 00:15:34.343 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.343 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.343 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.601 { 00:15:34.601 "cntlid": 73, 00:15:34.601 "qid": 0, 00:15:34.601 "state": "enabled", 00:15:34.601 "thread": "nvmf_tgt_poll_group_000", 00:15:34.601 "listen_address": { 00:15:34.601 "trtype": "TCP", 00:15:34.601 "adrfam": "IPv4", 00:15:34.601 "traddr": "10.0.0.2", 00:15:34.601 "trsvcid": "4420" 00:15:34.601 }, 00:15:34.601 "peer_address": { 00:15:34.601 "trtype": "TCP", 00:15:34.601 "adrfam": "IPv4", 00:15:34.601 "traddr": "10.0.0.1", 00:15:34.601 "trsvcid": "37944" 00:15:34.601 }, 00:15:34.601 "auth": { 00:15:34.601 
"state": "completed", 00:15:34.601 "digest": "sha384", 00:15:34.601 "dhgroup": "ffdhe4096" 00:15:34.601 } 00:15:34.601 } 00:15:34.601 ]' 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.601 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.858 12:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.793 12:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.050 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.307 00:15:36.307 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.307 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.307 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.565 { 00:15:36.565 "cntlid": 75, 00:15:36.565 "qid": 0, 00:15:36.565 "state": "enabled", 00:15:36.565 "thread": "nvmf_tgt_poll_group_000", 00:15:36.565 "listen_address": { 00:15:36.565 "trtype": "TCP", 00:15:36.565 "adrfam": "IPv4", 00:15:36.565 "traddr": "10.0.0.2", 00:15:36.565 "trsvcid": "4420" 00:15:36.565 }, 00:15:36.565 "peer_address": { 00:15:36.565 "trtype": "TCP", 00:15:36.565 "adrfam": "IPv4", 00:15:36.565 "traddr": "10.0.0.1", 00:15:36.565 "trsvcid": "37964" 00:15:36.565 }, 00:15:36.565 "auth": { 00:15:36.565 "state": "completed", 00:15:36.565 "digest": "sha384", 00:15:36.565 "dhgroup": "ffdhe4096" 00:15:36.565 } 00:15:36.565 } 00:15:36.565 ]' 00:15:36.565 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.823 12:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.081 12:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.015 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:38.273 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.274 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:15:38.843 00:15:38.843 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.843 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.843 12:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.843 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.843 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.844 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.844 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.844 12:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.844 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.844 { 00:15:38.844 "cntlid": 77, 00:15:38.844 "qid": 0, 00:15:38.844 "state": "enabled", 00:15:38.844 "thread": "nvmf_tgt_poll_group_000", 00:15:38.844 "listen_address": { 00:15:38.844 "trtype": "TCP", 00:15:38.844 "adrfam": "IPv4", 00:15:38.844 "traddr": "10.0.0.2", 00:15:38.844 "trsvcid": "4420" 00:15:38.844 }, 00:15:38.844 "peer_address": { 00:15:38.844 "trtype": "TCP", 00:15:38.844 "adrfam": "IPv4", 00:15:38.844 "traddr": "10.0.0.1", 00:15:38.844 "trsvcid": "37984" 00:15:38.844 }, 00:15:38.844 "auth": { 00:15:38.844 "state": "completed", 00:15:38.844 "digest": "sha384", 00:15:38.844 "dhgroup": "ffdhe4096" 00:15:38.844 } 00:15:38.844 } 00:15:38.844 ]' 00:15:38.844 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.102 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.359 12:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.298 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.556 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.814 00:15:40.814 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.814 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.814 12:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.073 { 00:15:41.073 "cntlid": 79, 00:15:41.073 "qid": 
0, 00:15:41.073 "state": "enabled", 00:15:41.073 "thread": "nvmf_tgt_poll_group_000", 00:15:41.073 "listen_address": { 00:15:41.073 "trtype": "TCP", 00:15:41.073 "adrfam": "IPv4", 00:15:41.073 "traddr": "10.0.0.2", 00:15:41.073 "trsvcid": "4420" 00:15:41.073 }, 00:15:41.073 "peer_address": { 00:15:41.073 "trtype": "TCP", 00:15:41.073 "adrfam": "IPv4", 00:15:41.073 "traddr": "10.0.0.1", 00:15:41.073 "trsvcid": "38008" 00:15:41.073 }, 00:15:41.073 "auth": { 00:15:41.073 "state": "completed", 00:15:41.073 "digest": "sha384", 00:15:41.073 "dhgroup": "ffdhe4096" 00:15:41.073 } 00:15:41.073 } 00:15:41.073 ]' 00:15:41.073 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.331 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.590 12:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.522 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.778 12:55:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.778 12:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.343 00:15:43.343 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.343 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.343 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.601 { 00:15:43.601 "cntlid": 81, 00:15:43.601 "qid": 0, 00:15:43.601 "state": "enabled", 00:15:43.601 "thread": "nvmf_tgt_poll_group_000", 00:15:43.601 "listen_address": { 00:15:43.601 "trtype": "TCP", 00:15:43.601 "adrfam": "IPv4", 00:15:43.601 "traddr": "10.0.0.2", 00:15:43.601 "trsvcid": "4420" 00:15:43.601 }, 00:15:43.601 "peer_address": { 00:15:43.601 "trtype": "TCP", 00:15:43.601 "adrfam": "IPv4", 00:15:43.601 "traddr": "10.0.0.1", 00:15:43.601 "trsvcid": "46150" 00:15:43.601 }, 00:15:43.601 "auth": { 00:15:43.601 "state": "completed", 00:15:43.601 "digest": "sha384", 00:15:43.601 "dhgroup": "ffdhe6144" 00:15:43.601 } 00:15:43.601 } 00:15:43.601 ]' 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.601 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.858 12:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:44.791 12:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.049 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.616 00:15:45.617 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.617 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.617 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.874 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.874 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.874 12:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.874 12:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.874 12:55:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.874 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.874 { 00:15:45.874 "cntlid": 83, 00:15:45.874 "qid": 0, 00:15:45.874 "state": "enabled", 00:15:45.874 "thread": "nvmf_tgt_poll_group_000", 00:15:45.874 "listen_address": { 00:15:45.874 "trtype": "TCP", 00:15:45.874 "adrfam": "IPv4", 00:15:45.874 "traddr": "10.0.0.2", 00:15:45.874 "trsvcid": "4420" 00:15:45.874 }, 00:15:45.874 "peer_address": { 00:15:45.874 "trtype": "TCP", 00:15:45.875 "adrfam": "IPv4", 00:15:45.875 "traddr": "10.0.0.1", 00:15:45.875 "trsvcid": "46174" 00:15:45.875 }, 00:15:45.875 "auth": { 00:15:45.875 "state": "completed", 00:15:45.875 "digest": "sha384", 00:15:45.875 "dhgroup": "ffdhe6144" 00:15:45.875 } 00:15:45.875 } 00:15:45.875 ]' 00:15:45.875 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.875 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.875 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.875 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.875 12:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.875 12:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.875 12:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.875 12:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.133 12:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret 
DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.066 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.324 12:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.891 00:15:47.891 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.891 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.891 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.147 { 00:15:48.147 "cntlid": 85, 00:15:48.147 "qid": 0, 00:15:48.147 "state": "enabled", 00:15:48.147 "thread": "nvmf_tgt_poll_group_000", 00:15:48.147 "listen_address": { 00:15:48.147 "trtype": "TCP", 00:15:48.147 "adrfam": "IPv4", 00:15:48.147 "traddr": "10.0.0.2", 00:15:48.147 "trsvcid": "4420" 00:15:48.147 }, 00:15:48.147 "peer_address": { 00:15:48.147 "trtype": "TCP", 00:15:48.147 "adrfam": "IPv4", 00:15:48.147 "traddr": "10.0.0.1", 00:15:48.147 "trsvcid": "46198" 00:15:48.147 }, 00:15:48.147 "auth": { 00:15:48.147 "state": "completed", 00:15:48.147 "digest": "sha384", 00:15:48.147 "dhgroup": "ffdhe6144" 00:15:48.147 } 00:15:48.147 } 00:15:48.147 ]' 00:15:48.147 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.404 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.698 12:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:15:49.633 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.890 12:55:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.890 12:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.890 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:49.890 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.455 00:15:50.455 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.455 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.455 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.712 { 00:15:50.712 "cntlid": 87, 00:15:50.712 "qid": 0, 00:15:50.712 "state": "enabled", 00:15:50.712 "thread": "nvmf_tgt_poll_group_000", 00:15:50.712 "listen_address": { 00:15:50.712 "trtype": "TCP", 00:15:50.712 "adrfam": "IPv4", 00:15:50.712 "traddr": "10.0.0.2", 00:15:50.712 "trsvcid": "4420" 00:15:50.712 }, 00:15:50.712 "peer_address": { 00:15:50.712 "trtype": "TCP", 00:15:50.712 "adrfam": "IPv4", 00:15:50.712 "traddr": "10.0.0.1", 00:15:50.712 "trsvcid": "46240" 00:15:50.712 }, 00:15:50.712 "auth": { 00:15:50.712 "state": "completed", 
00:15:50.712 "digest": "sha384", 00:15:50.712 "dhgroup": "ffdhe6144" 00:15:50.712 } 00:15:50.712 } 00:15:50.712 ]' 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.712 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.970 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.970 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.970 12:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.228 12:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.164 12:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.096 00:15:53.097 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.097 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.097 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.353 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.353 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.354 { 00:15:53.354 "cntlid": 89, 00:15:53.354 "qid": 0, 00:15:53.354 "state": "enabled", 00:15:53.354 "thread": "nvmf_tgt_poll_group_000", 00:15:53.354 "listen_address": { 00:15:53.354 "trtype": "TCP", 00:15:53.354 "adrfam": "IPv4", 00:15:53.354 "traddr": "10.0.0.2", 00:15:53.354 "trsvcid": "4420" 00:15:53.354 }, 00:15:53.354 "peer_address": { 00:15:53.354 "trtype": "TCP", 00:15:53.354 "adrfam": "IPv4", 00:15:53.354 "traddr": "10.0.0.1", 00:15:53.354 "trsvcid": "41772" 00:15:53.354 }, 00:15:53.354 "auth": { 00:15:53.354 "state": "completed", 00:15:53.354 "digest": "sha384", 00:15:53.354 "dhgroup": "ffdhe8192" 00:15:53.354 } 00:15:53.354 } 00:15:53.354 ]' 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.354 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.610 12:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.541 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.799 12:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
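[Editor's sketch] The cycle repeated in the trace above for each digest/dhgroup/key combination can be summarized as follows. This is a minimal illustration assembled from the commands visible in the log — it assumes the same rpc.py path, host socket (/var/tmp/host.sock), target address (10.0.0.2:4420), host NQN and subsystem NQN shown there, and it is not the actual target/auth.sh script:

    #!/usr/bin/env bash
    # One DH-HMAC-CHAP verification cycle, as exercised by the trace above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

    # 1. Restrict the host-side bdev_nvme module to one digest and DH group.
    $RPC -s $HOST_SOCK bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # 2. Allow the host on the subsystem with a DH-CHAP key (the optional
    #    controller key enables bidirectional authentication).
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach a controller from the host side using the same key pair.
    $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 4. Verify the negotiated auth state, digest and DH group on the target.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

    # 5. Tear down before the next digest/dhgroup/key combination.
    $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN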
00:15:55.761 00:15:55.761 12:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.761 12:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.761 12:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.018 { 00:15:56.018 "cntlid": 91, 00:15:56.018 "qid": 0, 00:15:56.018 "state": "enabled", 00:15:56.018 "thread": "nvmf_tgt_poll_group_000", 00:15:56.018 "listen_address": { 00:15:56.018 "trtype": "TCP", 00:15:56.018 "adrfam": "IPv4", 00:15:56.018 "traddr": "10.0.0.2", 00:15:56.018 "trsvcid": "4420" 00:15:56.018 }, 00:15:56.018 "peer_address": { 00:15:56.018 "trtype": "TCP", 00:15:56.018 "adrfam": "IPv4", 00:15:56.018 "traddr": "10.0.0.1", 00:15:56.018 "trsvcid": "41796" 00:15:56.018 }, 00:15:56.018 "auth": { 00:15:56.018 "state": "completed", 00:15:56.018 "digest": "sha384", 00:15:56.018 "dhgroup": "ffdhe8192" 00:15:56.018 } 00:15:56.018 } 00:15:56.018 ]' 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.018 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.276 12:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.208 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.466 12:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.398 00:15:58.398 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.398 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.398 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.656 { 
00:15:58.656 "cntlid": 93, 00:15:58.656 "qid": 0, 00:15:58.656 "state": "enabled", 00:15:58.656 "thread": "nvmf_tgt_poll_group_000", 00:15:58.656 "listen_address": { 00:15:58.656 "trtype": "TCP", 00:15:58.656 "adrfam": "IPv4", 00:15:58.656 "traddr": "10.0.0.2", 00:15:58.656 "trsvcid": "4420" 00:15:58.656 }, 00:15:58.656 "peer_address": { 00:15:58.656 "trtype": "TCP", 00:15:58.656 "adrfam": "IPv4", 00:15:58.656 "traddr": "10.0.0.1", 00:15:58.656 "trsvcid": "41824" 00:15:58.656 }, 00:15:58.656 "auth": { 00:15:58.656 "state": "completed", 00:15:58.656 "digest": "sha384", 00:15:58.656 "dhgroup": "ffdhe8192" 00:15:58.656 } 00:15:58.656 } 00:15:58.656 ]' 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.656 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.914 12:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.845 12:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.103 12:55:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.103 12:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.035 00:16:01.035 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.035 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.035 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.293 { 00:16:01.293 "cntlid": 95, 00:16:01.293 "qid": 0, 00:16:01.293 "state": "enabled", 00:16:01.293 "thread": "nvmf_tgt_poll_group_000", 00:16:01.293 "listen_address": { 00:16:01.293 "trtype": "TCP", 00:16:01.293 "adrfam": "IPv4", 00:16:01.293 "traddr": "10.0.0.2", 00:16:01.293 "trsvcid": "4420" 00:16:01.293 }, 00:16:01.293 "peer_address": { 00:16:01.293 "trtype": "TCP", 00:16:01.293 "adrfam": "IPv4", 00:16:01.293 "traddr": "10.0.0.1", 00:16:01.293 "trsvcid": "41864" 00:16:01.293 }, 00:16:01.293 "auth": { 00:16:01.293 "state": "completed", 00:16:01.293 "digest": "sha384", 00:16:01.293 "dhgroup": "ffdhe8192" 00:16:01.293 } 00:16:01.293 } 00:16:01.293 ]' 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.293 12:55:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.293 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.859 12:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.791 12:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.048 00:16:03.048 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.048 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.048 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.305 { 00:16:03.305 "cntlid": 97, 00:16:03.305 "qid": 0, 00:16:03.305 "state": "enabled", 00:16:03.305 "thread": "nvmf_tgt_poll_group_000", 00:16:03.305 "listen_address": { 00:16:03.305 "trtype": "TCP", 00:16:03.305 "adrfam": "IPv4", 00:16:03.305 "traddr": "10.0.0.2", 00:16:03.305 "trsvcid": "4420" 00:16:03.305 }, 00:16:03.305 "peer_address": { 00:16:03.305 "trtype": "TCP", 00:16:03.305 "adrfam": "IPv4", 00:16:03.305 "traddr": "10.0.0.1", 00:16:03.305 "trsvcid": "53190" 00:16:03.305 }, 00:16:03.305 "auth": { 00:16:03.305 "state": "completed", 00:16:03.305 "digest": "sha512", 00:16:03.305 "dhgroup": "null" 00:16:03.305 } 00:16:03.305 } 00:16:03.305 ]' 00:16:03.305 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.563 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.821 12:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret 
DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:04.753 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.754 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.011 12:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.011 12:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.011 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.012 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.269 00:16:05.269 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.269 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.269 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.525 { 00:16:05.525 "cntlid": 99, 00:16:05.525 "qid": 0, 00:16:05.525 "state": "enabled", 00:16:05.525 "thread": "nvmf_tgt_poll_group_000", 00:16:05.525 "listen_address": { 00:16:05.525 "trtype": "TCP", 00:16:05.525 "adrfam": "IPv4", 00:16:05.525 "traddr": "10.0.0.2", 00:16:05.525 "trsvcid": "4420" 00:16:05.525 }, 00:16:05.525 "peer_address": { 00:16:05.525 "trtype": "TCP", 00:16:05.525 "adrfam": "IPv4", 00:16:05.525 "traddr": "10.0.0.1", 00:16:05.525 "trsvcid": "53208" 00:16:05.525 }, 00:16:05.525 "auth": { 00:16:05.525 "state": "completed", 00:16:05.525 "digest": "sha512", 00:16:05.525 "dhgroup": "null" 00:16:05.525 } 00:16:05.525 } 00:16:05.525 ]' 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.525 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.783 12:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.715 12:55:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.715 12:55:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.973 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.538 00:16:07.538 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.538 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.538 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.795 { 00:16:07.795 "cntlid": 101, 00:16:07.795 "qid": 0, 00:16:07.795 "state": "enabled", 00:16:07.795 "thread": "nvmf_tgt_poll_group_000", 00:16:07.795 "listen_address": { 00:16:07.795 "trtype": "TCP", 00:16:07.795 "adrfam": "IPv4", 00:16:07.795 "traddr": "10.0.0.2", 00:16:07.795 "trsvcid": "4420" 00:16:07.795 }, 00:16:07.795 "peer_address": { 00:16:07.795 "trtype": "TCP", 00:16:07.795 "adrfam": "IPv4", 00:16:07.795 "traddr": "10.0.0.1", 00:16:07.795 "trsvcid": "53236" 00:16:07.795 }, 00:16:07.795 "auth": 
{ 00:16:07.795 "state": "completed", 00:16:07.795 "digest": "sha512", 00:16:07.795 "dhgroup": "null" 00:16:07.795 } 00:16:07.795 } 00:16:07.795 ]' 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.795 12:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.052 12:55:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.984 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.251 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.509 00:16:09.509 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.509 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.509 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.766 { 00:16:09.766 "cntlid": 103, 00:16:09.766 "qid": 0, 00:16:09.766 "state": "enabled", 00:16:09.766 "thread": "nvmf_tgt_poll_group_000", 00:16:09.766 "listen_address": { 00:16:09.766 "trtype": "TCP", 00:16:09.766 "adrfam": "IPv4", 00:16:09.766 "traddr": "10.0.0.2", 00:16:09.766 "trsvcid": "4420" 00:16:09.766 }, 00:16:09.766 "peer_address": { 00:16:09.766 "trtype": "TCP", 00:16:09.766 "adrfam": "IPv4", 00:16:09.766 "traddr": "10.0.0.1", 00:16:09.766 "trsvcid": "53262" 00:16:09.766 }, 00:16:09.766 "auth": { 00:16:09.766 "state": "completed", 00:16:09.766 "digest": "sha512", 00:16:09.766 "dhgroup": "null" 00:16:09.766 } 00:16:09.766 } 00:16:09.766 ]' 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.766 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.023 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:10.023 12:55:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.023 12:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.023 12:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.023 12:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.280 12:55:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.213 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.470 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.728 00:16:11.728 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.728 12:55:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.728 12:55:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.985 { 00:16:11.985 "cntlid": 105, 00:16:11.985 "qid": 0, 00:16:11.985 "state": "enabled", 00:16:11.985 "thread": "nvmf_tgt_poll_group_000", 00:16:11.985 "listen_address": { 00:16:11.985 "trtype": "TCP", 00:16:11.985 "adrfam": "IPv4", 00:16:11.985 "traddr": "10.0.0.2", 00:16:11.985 "trsvcid": "4420" 00:16:11.985 }, 00:16:11.985 "peer_address": { 00:16:11.985 "trtype": "TCP", 00:16:11.985 "adrfam": "IPv4", 00:16:11.985 "traddr": "10.0.0.1", 00:16:11.985 "trsvcid": "53296" 00:16:11.985 }, 00:16:11.985 "auth": { 00:16:11.985 "state": "completed", 00:16:11.985 "digest": "sha512", 00:16:11.985 "dhgroup": "ffdhe2048" 00:16:11.985 } 00:16:11.985 } 00:16:11.985 ]' 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.985 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.242 12:55:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
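Each iteration ends the way the trace above shows for sha512/ffdhe2048 with key0: the host controller is detached, the same subsystem is exercised once more through the kernel initiator with nvme-cli (passing the DHHC-1 secrets directly), and the host entry is removed from the subsystem before the next digest/dhgroup/key combination. A minimal sketch of that tail, assuming the addresses and NQNs from the trace ($HOSTNQN/$HOSTID and $KEY0/$CKEY0 are placeholders for the uuid host NQN/hostid and the DHHC-1:00:.../DHHC-1:03:... secrets printed above):

  # kernel-initiator check: connect through nvme-cli with the same DH-HMAC-CHAP material
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # target-side cleanup so the next combination starts from a clean subsystem
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"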
00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.216 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.472 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:13.728 00:16:13.728 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.728 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.728 12:55:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.985 { 00:16:13.985 "cntlid": 107, 00:16:13.985 "qid": 0, 00:16:13.985 "state": "enabled", 00:16:13.985 "thread": 
"nvmf_tgt_poll_group_000", 00:16:13.985 "listen_address": { 00:16:13.985 "trtype": "TCP", 00:16:13.985 "adrfam": "IPv4", 00:16:13.985 "traddr": "10.0.0.2", 00:16:13.985 "trsvcid": "4420" 00:16:13.985 }, 00:16:13.985 "peer_address": { 00:16:13.985 "trtype": "TCP", 00:16:13.985 "adrfam": "IPv4", 00:16:13.985 "traddr": "10.0.0.1", 00:16:13.985 "trsvcid": "49100" 00:16:13.985 }, 00:16:13.985 "auth": { 00:16:13.985 "state": "completed", 00:16:13.985 "digest": "sha512", 00:16:13.985 "dhgroup": "ffdhe2048" 00:16:13.985 } 00:16:13.985 } 00:16:13.985 ]' 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.985 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.241 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.241 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.241 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.241 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.241 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.497 12:55:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:15.429 12:55:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:15.429 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.015 00:16:16.015 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.015 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.015 12:55:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.015 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.015 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.015 12:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.015 12:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.015 12:55:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.015 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.015 { 00:16:16.015 "cntlid": 109, 00:16:16.015 "qid": 0, 00:16:16.015 "state": "enabled", 00:16:16.015 "thread": "nvmf_tgt_poll_group_000", 00:16:16.015 "listen_address": { 00:16:16.015 "trtype": "TCP", 00:16:16.015 "adrfam": "IPv4", 00:16:16.015 "traddr": "10.0.0.2", 00:16:16.015 "trsvcid": "4420" 00:16:16.016 }, 00:16:16.016 "peer_address": { 00:16:16.016 "trtype": "TCP", 00:16:16.016 "adrfam": "IPv4", 00:16:16.016 "traddr": "10.0.0.1", 00:16:16.016 "trsvcid": "49124" 00:16:16.016 }, 00:16:16.016 "auth": { 00:16:16.016 "state": "completed", 00:16:16.016 "digest": "sha512", 00:16:16.016 "dhgroup": "ffdhe2048" 00:16:16.016 } 00:16:16.016 } 00:16:16.016 ]' 00:16:16.016 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.273 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.531 12:55:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.466 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.724 12:55:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.982 00:16:17.982 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.982 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.982 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.258 { 00:16:18.258 "cntlid": 111, 00:16:18.258 "qid": 0, 00:16:18.258 "state": "enabled", 00:16:18.258 "thread": "nvmf_tgt_poll_group_000", 00:16:18.258 "listen_address": { 00:16:18.258 "trtype": "TCP", 00:16:18.258 "adrfam": "IPv4", 00:16:18.258 "traddr": "10.0.0.2", 00:16:18.258 "trsvcid": "4420" 00:16:18.258 }, 00:16:18.258 "peer_address": { 00:16:18.258 "trtype": "TCP", 00:16:18.258 "adrfam": "IPv4", 00:16:18.258 "traddr": "10.0.0.1", 00:16:18.258 "trsvcid": "49150" 00:16:18.258 }, 00:16:18.258 "auth": { 00:16:18.258 "state": "completed", 00:16:18.258 "digest": "sha512", 00:16:18.258 "dhgroup": "ffdhe2048" 00:16:18.258 } 00:16:18.258 } 00:16:18.258 ]' 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.258 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.830 12:55:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.762 12:55:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.324 00:16:20.324 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.324 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.324 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.580 { 00:16:20.580 "cntlid": 113, 00:16:20.580 "qid": 0, 00:16:20.580 "state": "enabled", 00:16:20.580 "thread": "nvmf_tgt_poll_group_000", 00:16:20.580 "listen_address": { 00:16:20.580 "trtype": "TCP", 00:16:20.580 "adrfam": "IPv4", 00:16:20.580 "traddr": "10.0.0.2", 00:16:20.580 "trsvcid": "4420" 00:16:20.580 }, 00:16:20.580 "peer_address": { 00:16:20.580 "trtype": "TCP", 00:16:20.580 "adrfam": "IPv4", 00:16:20.580 "traddr": "10.0.0.1", 00:16:20.580 "trsvcid": "49174" 00:16:20.580 }, 00:16:20.580 "auth": { 00:16:20.580 "state": "completed", 00:16:20.580 "digest": "sha512", 00:16:20.580 "dhgroup": "ffdhe3072" 00:16:20.580 } 00:16:20.580 } 00:16:20.580 ]' 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.580 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.838 12:55:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.769 12:55:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.027 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.284 00:16:22.284 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.284 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.284 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.542 { 00:16:22.542 "cntlid": 115, 00:16:22.542 "qid": 0, 00:16:22.542 "state": "enabled", 00:16:22.542 "thread": "nvmf_tgt_poll_group_000", 00:16:22.542 "listen_address": { 00:16:22.542 "trtype": "TCP", 00:16:22.542 "adrfam": "IPv4", 00:16:22.542 "traddr": "10.0.0.2", 00:16:22.542 "trsvcid": "4420" 00:16:22.542 }, 00:16:22.542 "peer_address": { 00:16:22.542 "trtype": "TCP", 00:16:22.542 "adrfam": "IPv4", 00:16:22.542 "traddr": "10.0.0.1", 00:16:22.542 "trsvcid": "49200" 00:16:22.542 }, 00:16:22.542 "auth": { 00:16:22.542 "state": "completed", 00:16:22.542 "digest": "sha512", 00:16:22.542 "dhgroup": "ffdhe3072" 00:16:22.542 } 00:16:22.542 } 
00:16:22.542 ]' 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.542 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.799 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.799 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.799 12:55:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.055 12:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.987 12:55:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.245 12:55:42 
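For readers following the trace, the block above is one full pass of the host-RPC leg of the DH-HMAC-CHAP test: the host-side bdev_nvme layer is pinned to a single digest/DH-group pair, the host NQN is authorized on the subsystem with the key under test, and a controller is attached so the handshake actually runs. A condensed, hedged replay of that pass is sketched below; the script paths, host socket, NQNs and transport address are copied from the log, while key2/ckey2 are keyring names the test registered earlier (outside this excerpt) and the target-side RPC is assumed to answer on its default socket.

# Hedged sketch, not the test script itself: one host-RPC iteration as seen in the trace above.
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"   # target app, default socket assumed
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02"
SUBNQN="nqn.2024-03.io.spdk:cnode0"

# Pin the host NVMe driver to one digest/DH-group pair for this iteration.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Authorize the host NQN on the subsystem with the key (and controller key) under test.
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach a controller through the host RPC; this is where DH-HMAC-CHAP is negotiated.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2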
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.245 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.502 00:16:24.502 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.502 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.502 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.758 { 00:16:24.758 "cntlid": 117, 00:16:24.758 "qid": 0, 00:16:24.758 "state": "enabled", 00:16:24.758 "thread": "nvmf_tgt_poll_group_000", 00:16:24.758 "listen_address": { 00:16:24.758 "trtype": "TCP", 00:16:24.758 "adrfam": "IPv4", 00:16:24.758 "traddr": "10.0.0.2", 00:16:24.758 "trsvcid": "4420" 00:16:24.758 }, 00:16:24.758 "peer_address": { 00:16:24.758 "trtype": "TCP", 00:16:24.758 "adrfam": "IPv4", 00:16:24.758 "traddr": "10.0.0.1", 00:16:24.758 "trsvcid": "48360" 00:16:24.758 }, 00:16:24.758 "auth": { 00:16:24.758 "state": "completed", 00:16:24.758 "digest": "sha512", 00:16:24.758 "dhgroup": "ffdhe3072" 00:16:24.758 } 00:16:24.758 } 00:16:24.758 ]' 00:16:24.758 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.015 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.015 12:55:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.015 12:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.015 12:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.015 12:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.015 12:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.015 12:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.272 12:55:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.204 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.461 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.719 00:16:26.719 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.719 12:55:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.719 12:55:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.977 { 00:16:26.977 "cntlid": 119, 00:16:26.977 "qid": 0, 00:16:26.977 "state": "enabled", 00:16:26.977 "thread": "nvmf_tgt_poll_group_000", 00:16:26.977 "listen_address": { 00:16:26.977 "trtype": "TCP", 00:16:26.977 "adrfam": "IPv4", 00:16:26.977 "traddr": "10.0.0.2", 00:16:26.977 "trsvcid": "4420" 00:16:26.977 }, 00:16:26.977 "peer_address": { 00:16:26.977 "trtype": "TCP", 00:16:26.977 "adrfam": "IPv4", 00:16:26.977 "traddr": "10.0.0.1", 00:16:26.977 "trsvcid": "48384" 00:16:26.977 }, 00:16:26.977 "auth": { 00:16:26.977 "state": "completed", 00:16:26.977 "digest": "sha512", 00:16:26.977 "dhgroup": "ffdhe3072" 00:16:26.977 } 00:16:26.977 } 00:16:26.977 ]' 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.977 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.233 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:27.233 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.233 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.233 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.233 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.490 12:55:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.422 12:55:46 
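The [[ ... ]] comparisons that recur above can also be run by hand against the target. The following is a hedged, stand-alone version of the checks at target/auth.sh@44-48, assuming the target application answers on its default RPC socket and reusing the subsystem NQN from the trace; the negotiated digest and DH group are reported per queue pair in the auth block of nvmf_subsystem_get_qpairs.

# Hedged sketch of the verification step; expected values match the ffdhe3072 iteration above.
qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
             nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

# The admin queue pair created by the attach carries the negotiated auth parameters.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]

# The controller name on the host side should still be the one that was attached.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0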
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.422 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.679 12:55:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.936 00:16:28.936 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.936 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.936 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.193 { 00:16:29.193 "cntlid": 121, 00:16:29.193 "qid": 0, 00:16:29.193 "state": "enabled", 00:16:29.193 "thread": "nvmf_tgt_poll_group_000", 00:16:29.193 "listen_address": { 00:16:29.193 "trtype": "TCP", 00:16:29.193 "adrfam": "IPv4", 
00:16:29.193 "traddr": "10.0.0.2", 00:16:29.193 "trsvcid": "4420" 00:16:29.193 }, 00:16:29.193 "peer_address": { 00:16:29.193 "trtype": "TCP", 00:16:29.193 "adrfam": "IPv4", 00:16:29.193 "traddr": "10.0.0.1", 00:16:29.193 "trsvcid": "48426" 00:16:29.193 }, 00:16:29.193 "auth": { 00:16:29.193 "state": "completed", 00:16:29.193 "digest": "sha512", 00:16:29.193 "dhgroup": "ffdhe4096" 00:16:29.193 } 00:16:29.193 } 00:16:29.193 ]' 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.193 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.450 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.450 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.451 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.451 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.451 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.708 12:55:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.637 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:30.893 12:55:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.893 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.894 12:55:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.149 00:16:31.149 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.149 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.149 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.406 { 00:16:31.406 "cntlid": 123, 00:16:31.406 "qid": 0, 00:16:31.406 "state": "enabled", 00:16:31.406 "thread": "nvmf_tgt_poll_group_000", 00:16:31.406 "listen_address": { 00:16:31.406 "trtype": "TCP", 00:16:31.406 "adrfam": "IPv4", 00:16:31.406 "traddr": "10.0.0.2", 00:16:31.406 "trsvcid": "4420" 00:16:31.406 }, 00:16:31.406 "peer_address": { 00:16:31.406 "trtype": "TCP", 00:16:31.406 "adrfam": "IPv4", 00:16:31.406 "traddr": "10.0.0.1", 00:16:31.406 "trsvcid": "48452" 00:16:31.406 }, 00:16:31.406 "auth": { 00:16:31.406 "state": "completed", 00:16:31.406 "digest": "sha512", 00:16:31.406 "dhgroup": "ffdhe4096" 00:16:31.406 } 00:16:31.406 } 00:16:31.406 ]' 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.406 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.662 12:55:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.662 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.662 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.919 12:55:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.852 12:55:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.416 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.416 12:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.673 { 00:16:33.673 "cntlid": 125, 00:16:33.673 "qid": 0, 00:16:33.673 "state": "enabled", 00:16:33.673 "thread": "nvmf_tgt_poll_group_000", 00:16:33.673 "listen_address": { 00:16:33.673 "trtype": "TCP", 00:16:33.673 "adrfam": "IPv4", 00:16:33.673 "traddr": "10.0.0.2", 00:16:33.673 "trsvcid": "4420" 00:16:33.673 }, 00:16:33.673 "peer_address": { 00:16:33.673 "trtype": "TCP", 00:16:33.673 "adrfam": "IPv4", 00:16:33.673 "traddr": "10.0.0.1", 00:16:33.673 "trsvcid": "52944" 00:16:33.673 }, 00:16:33.673 "auth": { 00:16:33.673 "state": "completed", 00:16:33.673 "digest": "sha512", 00:16:33.673 "dhgroup": "ffdhe4096" 00:16:33.673 } 00:16:33.673 } 00:16:33.673 ]' 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.673 12:55:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.930 12:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
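The nvme connect / nvme disconnect pair that just completed above is the kernel-initiator leg of the same iteration: the Linux host driver performs in-band DH-HMAC-CHAP instead of the SPDK host RPC. A hedged sketch follows; the transport address, NQNs and host ID are taken from the trace, while DHCHAP_KEY and DHCHAP_CTRL_KEY stand in for the DHHC-1 secret strings printed there (the controller secret is only passed when bidirectional authentication is being exercised).

# Hedged sketch of the kernel-initiator leg (target/auth.sh@52-55 above).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

# Tear the session down again before the next key is tried.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0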
00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.863 12:55:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.121 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.687 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.687 { 00:16:35.687 "cntlid": 127, 00:16:35.687 "qid": 0, 00:16:35.687 "state": "enabled", 00:16:35.687 "thread": "nvmf_tgt_poll_group_000", 00:16:35.687 "listen_address": { 00:16:35.687 "trtype": "TCP", 00:16:35.687 "adrfam": "IPv4", 00:16:35.687 "traddr": "10.0.0.2", 00:16:35.687 "trsvcid": "4420" 00:16:35.687 }, 00:16:35.687 "peer_address": { 00:16:35.687 "trtype": "TCP", 00:16:35.687 "adrfam": "IPv4", 00:16:35.687 "traddr": "10.0.0.1", 00:16:35.687 "trsvcid": "52968" 00:16:35.687 }, 00:16:35.687 "auth": { 00:16:35.687 "state": "completed", 00:16:35.687 "digest": "sha512", 00:16:35.687 "dhgroup": "ffdhe4096" 00:16:35.687 } 00:16:35.687 } 00:16:35.687 ]' 00:16:35.687 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.944 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.944 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.944 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.944 12:55:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.944 12:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.944 12:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.944 12:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.202 12:55:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.133 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.390 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.955 00:16:37.955 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.955 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.955 12:55:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.213 { 00:16:38.213 "cntlid": 129, 00:16:38.213 "qid": 0, 00:16:38.213 "state": "enabled", 00:16:38.213 "thread": "nvmf_tgt_poll_group_000", 00:16:38.213 "listen_address": { 00:16:38.213 "trtype": "TCP", 00:16:38.213 "adrfam": "IPv4", 00:16:38.213 "traddr": "10.0.0.2", 00:16:38.213 "trsvcid": "4420" 00:16:38.213 }, 00:16:38.213 "peer_address": { 00:16:38.213 "trtype": "TCP", 00:16:38.213 "adrfam": "IPv4", 00:16:38.213 "traddr": "10.0.0.1", 00:16:38.213 "trsvcid": "53006" 00:16:38.213 }, 00:16:38.213 "auth": { 00:16:38.213 "state": "completed", 00:16:38.213 "digest": "sha512", 00:16:38.213 "dhgroup": "ffdhe6144" 00:16:38.213 } 00:16:38.213 } 00:16:38.213 ]' 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.213 12:55:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.213 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.471 12:55:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.405 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.663 12:55:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.663 12:55:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.228 00:16:40.228 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.228 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.228 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.485 { 00:16:40.485 "cntlid": 131, 00:16:40.485 "qid": 0, 00:16:40.485 "state": "enabled", 00:16:40.485 "thread": "nvmf_tgt_poll_group_000", 00:16:40.485 "listen_address": { 00:16:40.485 "trtype": "TCP", 00:16:40.485 "adrfam": "IPv4", 00:16:40.485 "traddr": "10.0.0.2", 00:16:40.485 "trsvcid": "4420" 00:16:40.485 }, 00:16:40.485 "peer_address": { 00:16:40.485 "trtype": "TCP", 00:16:40.485 "adrfam": "IPv4", 00:16:40.485 "traddr": "10.0.0.1", 00:16:40.485 "trsvcid": "53048" 00:16:40.485 }, 00:16:40.485 "auth": { 00:16:40.485 "state": "completed", 00:16:40.485 "digest": "sha512", 00:16:40.485 "dhgroup": "ffdhe6144" 00:16:40.485 } 00:16:40.485 } 00:16:40.485 ]' 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.485 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.751 12:55:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.739 12:55:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.997 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.561 00:16:42.562 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.562 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.562 12:56:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.819 { 00:16:42.819 "cntlid": 133, 00:16:42.819 "qid": 0, 00:16:42.819 "state": "enabled", 00:16:42.819 "thread": "nvmf_tgt_poll_group_000", 00:16:42.819 "listen_address": { 00:16:42.819 "trtype": "TCP", 00:16:42.819 "adrfam": "IPv4", 00:16:42.819 "traddr": "10.0.0.2", 00:16:42.819 "trsvcid": "4420" 00:16:42.819 }, 00:16:42.819 "peer_address": { 00:16:42.819 "trtype": "TCP", 00:16:42.819 "adrfam": "IPv4", 00:16:42.819 "traddr": "10.0.0.1", 00:16:42.819 "trsvcid": "53064" 00:16:42.819 }, 00:16:42.819 "auth": { 00:16:42.819 "state": "completed", 00:16:42.819 "digest": "sha512", 00:16:42.819 "dhgroup": "ffdhe6144" 00:16:42.819 } 00:16:42.819 } 00:16:42.819 ]' 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.819 12:56:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.819 12:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.819 12:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.076 12:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.076 12:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.076 12:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.334 12:56:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.266 12:56:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.266 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.524 12:56:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.089 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.089 { 00:16:45.089 "cntlid": 135, 00:16:45.089 "qid": 0, 00:16:45.089 "state": "enabled", 00:16:45.089 "thread": "nvmf_tgt_poll_group_000", 00:16:45.089 "listen_address": { 00:16:45.089 "trtype": "TCP", 00:16:45.089 "adrfam": "IPv4", 00:16:45.089 "traddr": "10.0.0.2", 00:16:45.089 "trsvcid": "4420" 00:16:45.089 }, 
00:16:45.089 "peer_address": { 00:16:45.089 "trtype": "TCP", 00:16:45.089 "adrfam": "IPv4", 00:16:45.089 "traddr": "10.0.0.1", 00:16:45.089 "trsvcid": "56742" 00:16:45.089 }, 00:16:45.089 "auth": { 00:16:45.089 "state": "completed", 00:16:45.089 "digest": "sha512", 00:16:45.089 "dhgroup": "ffdhe6144" 00:16:45.089 } 00:16:45.089 } 00:16:45.089 ]' 00:16:45.089 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.345 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.346 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.346 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:45.346 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.346 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.346 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.346 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.602 12:56:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.534 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.792 12:56:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.357 00:16:47.357 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.357 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.357 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.615 { 00:16:47.615 "cntlid": 137, 00:16:47.615 "qid": 0, 00:16:47.615 "state": "enabled", 00:16:47.615 "thread": "nvmf_tgt_poll_group_000", 00:16:47.615 "listen_address": { 00:16:47.615 "trtype": "TCP", 00:16:47.615 "adrfam": "IPv4", 00:16:47.615 "traddr": "10.0.0.2", 00:16:47.615 "trsvcid": "4420" 00:16:47.615 }, 00:16:47.615 "peer_address": { 00:16:47.615 "trtype": "TCP", 00:16:47.615 "adrfam": "IPv4", 00:16:47.615 "traddr": "10.0.0.1", 00:16:47.615 "trsvcid": "56768" 00:16:47.615 }, 00:16:47.615 "auth": { 00:16:47.615 "state": "completed", 00:16:47.615 "digest": "sha512", 00:16:47.615 "dhgroup": "ffdhe8192" 00:16:47.615 } 00:16:47.615 } 00:16:47.615 ]' 00:16:47.615 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.873 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.873 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.873 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.873 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.873 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.873 12:56:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.873 12:56:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.131 12:56:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.064 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.322 12:56:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.887 00:16:50.144 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.145 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.145 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.401 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.402 { 00:16:50.402 "cntlid": 139, 00:16:50.402 "qid": 0, 00:16:50.402 "state": "enabled", 00:16:50.402 "thread": "nvmf_tgt_poll_group_000", 00:16:50.402 "listen_address": { 00:16:50.402 "trtype": "TCP", 00:16:50.402 "adrfam": "IPv4", 00:16:50.402 "traddr": "10.0.0.2", 00:16:50.402 "trsvcid": "4420" 00:16:50.402 }, 00:16:50.402 "peer_address": { 00:16:50.402 "trtype": "TCP", 00:16:50.402 "adrfam": "IPv4", 00:16:50.402 "traddr": "10.0.0.1", 00:16:50.402 "trsvcid": "56790" 00:16:50.402 }, 00:16:50.402 "auth": { 00:16:50.402 "state": "completed", 00:16:50.402 "digest": "sha512", 00:16:50.402 "dhgroup": "ffdhe8192" 00:16:50.402 } 00:16:50.402 } 00:16:50.402 ]' 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.402 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.659 12:56:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:MTBjZjE0YWEwZTY0OThiYWNjMDg4NTMxYmQ0ODZkYTO0sctR: --dhchap-ctrl-secret DHHC-1:02:NWJkYTA3YzljNjEwMTkxYjY4YTU4OTU0ZGYzM2I1NWRkYzJhZjk1ZWVkNmMzMjY15eXWXg==: 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.588 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.845 12:56:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.772 00:16:52.772 12:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.772 12:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.772 12:56:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.029 { 00:16:53.029 "cntlid": 141, 00:16:53.029 "qid": 0, 00:16:53.029 "state": "enabled", 00:16:53.029 "thread": "nvmf_tgt_poll_group_000", 00:16:53.029 "listen_address": { 00:16:53.029 "trtype": "TCP", 00:16:53.029 "adrfam": "IPv4", 00:16:53.029 "traddr": "10.0.0.2", 00:16:53.029 "trsvcid": "4420" 00:16:53.029 }, 00:16:53.029 "peer_address": { 00:16:53.029 "trtype": "TCP", 00:16:53.029 "adrfam": "IPv4", 00:16:53.029 "traddr": "10.0.0.1", 00:16:53.029 "trsvcid": "56818" 00:16:53.029 }, 00:16:53.029 "auth": { 00:16:53.029 "state": "completed", 00:16:53.029 "digest": "sha512", 00:16:53.029 "dhgroup": "ffdhe8192" 00:16:53.029 } 00:16:53.029 } 00:16:53.029 ]' 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.029 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.286 12:56:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:MzZhMjE4YmI5YmU5ODkwN2VmMzI0NWI0ZWYyY2ZkNmFiYTY2YjQzM2JiZDI1MjU0GU4HAg==: --dhchap-ctrl-secret DHHC-1:01:MzEzNTUwZWJmN2RmZDgwY2ViNGEwYmViMDk1ZmI5ZTU/6IBz: 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.217 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:54.475 12:56:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.406 00:16:55.406 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.406 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.406 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.663 { 00:16:55.663 "cntlid": 143, 00:16:55.663 "qid": 0, 00:16:55.663 "state": "enabled", 00:16:55.663 "thread": "nvmf_tgt_poll_group_000", 00:16:55.663 "listen_address": { 00:16:55.663 "trtype": "TCP", 00:16:55.663 "adrfam": "IPv4", 00:16:55.663 "traddr": "10.0.0.2", 00:16:55.663 "trsvcid": "4420" 00:16:55.663 }, 00:16:55.663 "peer_address": { 00:16:55.663 "trtype": "TCP", 00:16:55.663 "adrfam": "IPv4", 00:16:55.663 "traddr": "10.0.0.1", 00:16:55.663 "trsvcid": "34822" 00:16:55.663 }, 00:16:55.663 "auth": { 00:16:55.663 "state": "completed", 00:16:55.663 "digest": "sha512", 00:16:55.663 "dhgroup": "ffdhe8192" 00:16:55.663 } 00:16:55.663 } 00:16:55.663 ]' 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.663 
12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.663 12:56:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.920 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:56.853 12:56:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.111 12:56:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.044 00:16:58.044 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.044 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.044 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.302 { 00:16:58.302 "cntlid": 145, 00:16:58.302 "qid": 0, 00:16:58.302 "state": "enabled", 00:16:58.302 "thread": "nvmf_tgt_poll_group_000", 00:16:58.302 "listen_address": { 00:16:58.302 "trtype": "TCP", 00:16:58.302 "adrfam": "IPv4", 00:16:58.302 "traddr": "10.0.0.2", 00:16:58.302 "trsvcid": "4420" 00:16:58.302 }, 00:16:58.302 "peer_address": { 00:16:58.302 "trtype": "TCP", 00:16:58.302 "adrfam": "IPv4", 00:16:58.302 "traddr": "10.0.0.1", 00:16:58.302 "trsvcid": "34838" 00:16:58.302 }, 00:16:58.302 "auth": { 00:16:58.302 "state": "completed", 00:16:58.302 "digest": "sha512", 00:16:58.302 "dhgroup": "ffdhe8192" 00:16:58.302 } 00:16:58.302 } 00:16:58.302 ]' 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.302 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.559 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.559 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.559 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.816 12:56:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ZmRhYTQwZTJlOGU3ZjZkNzJiZmYwM2I5OTQ1MWUwMjE1NjU1NmVjODEyOTg5MmMykfQHRw==: --dhchap-ctrl-secret DHHC-1:03:Zjk5YmUzOTYzMDkxYjkzNjdiMGY4MDZhZjUwZGYzMzAwNjJlZmQyMjhhODUyYzM4MDdiNzA3MDdiZDIzZDdhNcxiGfQ=: 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:59.746 12:56:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:17:00.678 request: 00:17:00.678 { 00:17:00.678 "name": "nvme0", 00:17:00.678 "trtype": "tcp", 00:17:00.678 "traddr": "10.0.0.2", 00:17:00.678 "adrfam": "ipv4", 00:17:00.678 "trsvcid": "4420", 00:17:00.678 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:00.678 "prchk_reftag": false, 00:17:00.678 "prchk_guard": false, 00:17:00.678 "hdgst": false, 00:17:00.678 "ddgst": false, 00:17:00.678 "dhchap_key": "key2", 00:17:00.678 "method": "bdev_nvme_attach_controller", 00:17:00.678 "req_id": 1 00:17:00.678 } 00:17:00.678 Got JSON-RPC error response 00:17:00.678 response: 00:17:00.678 { 00:17:00.678 "code": -5, 00:17:00.678 "message": "Input/output error" 00:17:00.678 } 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:00.678 12:56:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:01.243 request: 00:17:01.243 { 00:17:01.243 "name": "nvme0", 00:17:01.243 "trtype": "tcp", 00:17:01.243 "traddr": "10.0.0.2", 00:17:01.243 "adrfam": "ipv4", 00:17:01.243 "trsvcid": "4420", 00:17:01.243 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:01.243 "prchk_reftag": false, 00:17:01.243 "prchk_guard": false, 00:17:01.243 "hdgst": false, 00:17:01.243 "ddgst": false, 00:17:01.243 "dhchap_key": "key1", 00:17:01.243 "dhchap_ctrlr_key": "ckey2", 00:17:01.243 "method": "bdev_nvme_attach_controller", 00:17:01.243 "req_id": 1 00:17:01.243 } 00:17:01.243 Got JSON-RPC error response 00:17:01.243 response: 00:17:01.243 { 00:17:01.243 "code": -5, 00:17:01.243 "message": "Input/output error" 00:17:01.243 } 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.243 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.244 12:56:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.176 request: 00:17:02.176 { 00:17:02.176 "name": "nvme0", 00:17:02.176 "trtype": "tcp", 00:17:02.176 "traddr": "10.0.0.2", 00:17:02.176 "adrfam": "ipv4", 00:17:02.176 "trsvcid": "4420", 00:17:02.176 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:02.176 "prchk_reftag": false, 00:17:02.176 "prchk_guard": false, 00:17:02.176 "hdgst": false, 00:17:02.176 "ddgst": false, 00:17:02.176 "dhchap_key": "key1", 00:17:02.176 "dhchap_ctrlr_key": "ckey1", 00:17:02.176 "method": "bdev_nvme_attach_controller", 00:17:02.176 "req_id": 1 00:17:02.176 } 00:17:02.176 Got JSON-RPC error response 00:17:02.176 response: 00:17:02.176 { 00:17:02.176 "code": -5, 00:17:02.176 "message": "Input/output error" 00:17:02.176 } 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3383264 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3383264 ']' 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3383264 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3383264 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3383264' 00:17:02.176 killing process with pid 3383264 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3383264 00:17:02.176 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3383264 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3405376 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3405376 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3405376 ']' 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.433 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3405376 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3405376 ']' 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.691 12:56:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.948 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.948 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:02.948 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:02.948 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.948 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.206 12:56:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.200 00:17:04.200 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.200 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.201 { 00:17:04.201 
"cntlid": 1, 00:17:04.201 "qid": 0, 00:17:04.201 "state": "enabled", 00:17:04.201 "thread": "nvmf_tgt_poll_group_000", 00:17:04.201 "listen_address": { 00:17:04.201 "trtype": "TCP", 00:17:04.201 "adrfam": "IPv4", 00:17:04.201 "traddr": "10.0.0.2", 00:17:04.201 "trsvcid": "4420" 00:17:04.201 }, 00:17:04.201 "peer_address": { 00:17:04.201 "trtype": "TCP", 00:17:04.201 "adrfam": "IPv4", 00:17:04.201 "traddr": "10.0.0.1", 00:17:04.201 "trsvcid": "55548" 00:17:04.201 }, 00:17:04.201 "auth": { 00:17:04.201 "state": "completed", 00:17:04.201 "digest": "sha512", 00:17:04.201 "dhgroup": "ffdhe8192" 00:17:04.201 } 00:17:04.201 } 00:17:04.201 ]' 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.201 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.460 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.718 12:56:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:OWM0YjQ4OTM1MjI1ZjRiNTliZTg5OWEwMGMxZDM4NDI2ZDFkODk1ZThjMzk2ZDYyNTU3NjI4ZDhjN2E4OWQwMUQHTNI=: 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:05.649 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.907 12:56:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.165 request: 00:17:06.165 { 00:17:06.165 "name": "nvme0", 00:17:06.165 "trtype": "tcp", 00:17:06.165 "traddr": "10.0.0.2", 00:17:06.165 "adrfam": "ipv4", 00:17:06.165 "trsvcid": "4420", 00:17:06.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:06.165 "prchk_reftag": false, 00:17:06.165 "prchk_guard": false, 00:17:06.165 "hdgst": false, 00:17:06.165 "ddgst": false, 00:17:06.165 "dhchap_key": "key3", 00:17:06.165 "method": "bdev_nvme_attach_controller", 00:17:06.165 "req_id": 1 00:17:06.165 } 00:17:06.165 Got JSON-RPC error response 00:17:06.165 response: 00:17:06.165 { 00:17:06.165 "code": -5, 00:17:06.165 "message": "Input/output error" 00:17:06.165 } 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.165 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:06.423 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.423 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.423 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.423 request: 00:17:06.423 { 00:17:06.423 "name": "nvme0", 00:17:06.423 "trtype": "tcp", 00:17:06.423 "traddr": "10.0.0.2", 00:17:06.423 "adrfam": "ipv4", 00:17:06.423 "trsvcid": "4420", 00:17:06.423 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:06.423 "prchk_reftag": false, 00:17:06.423 "prchk_guard": false, 00:17:06.423 "hdgst": false, 00:17:06.423 "ddgst": false, 00:17:06.423 "dhchap_key": "key3", 00:17:06.424 "method": "bdev_nvme_attach_controller", 00:17:06.424 "req_id": 1 00:17:06.424 } 00:17:06.424 Got JSON-RPC error response 00:17:06.424 response: 00:17:06.424 { 00:17:06.424 "code": -5, 00:17:06.424 "message": "Input/output error" 00:17:06.424 } 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.424 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.681 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.939 12:56:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.939 request: 00:17:06.939 { 00:17:06.939 "name": "nvme0", 00:17:06.939 "trtype": "tcp", 00:17:06.939 "traddr": "10.0.0.2", 00:17:06.939 "adrfam": "ipv4", 00:17:06.939 "trsvcid": "4420", 00:17:06.939 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:17:06.939 "prchk_reftag": false, 00:17:06.939 "prchk_guard": false, 00:17:06.939 "hdgst": false, 00:17:06.939 "ddgst": false, 00:17:06.939 
"dhchap_key": "key0", 00:17:06.939 "dhchap_ctrlr_key": "key1", 00:17:06.939 "method": "bdev_nvme_attach_controller", 00:17:06.939 "req_id": 1 00:17:06.939 } 00:17:06.939 Got JSON-RPC error response 00:17:06.939 response: 00:17:06.939 { 00:17:06.939 "code": -5, 00:17:06.939 "message": "Input/output error" 00:17:06.939 } 00:17:07.198 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:07.198 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:07.198 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:07.198 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:07.198 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:07.198 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:07.456 00:17:07.456 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:07.456 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:07.456 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.728 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.728 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.728 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.988 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3383390 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3383390 ']' 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3383390 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3383390 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3383390' 00:17:07.989 killing process with pid 3383390 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3383390 00:17:07.989 12:56:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3383390 
00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.246 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.246 rmmod nvme_tcp 00:17:08.246 rmmod nvme_fabrics 00:17:08.504 rmmod nvme_keyring 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3405376 ']' 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3405376 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3405376 ']' 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3405376 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3405376 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3405376' 00:17:08.504 killing process with pid 3405376 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3405376 00:17:08.504 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3405376 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.762 12:56:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.664 12:56:28 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.664 12:56:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Zqw /tmp/spdk.key-sha256.tWT /tmp/spdk.key-sha384.A0I /tmp/spdk.key-sha512.jEg /tmp/spdk.key-sha512.sra /tmp/spdk.key-sha384.8NW /tmp/spdk.key-sha256.Cot '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:10.664 00:17:10.664 real 3m3.121s 00:17:10.664 user 7m8.279s 00:17:10.664 sys 0m25.479s 00:17:10.664 12:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.664 12:56:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.664 ************************************ 00:17:10.664 END TEST nvmf_auth_target 00:17:10.664 ************************************ 00:17:10.664 12:56:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:10.664 12:56:28 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:17:10.664 12:56:28 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:10.664 12:56:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:10.664 12:56:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.664 12:56:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:10.921 ************************************ 00:17:10.921 START TEST nvmf_bdevio_no_huge 00:17:10.921 ************************************ 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:10.921 * Looking for test storage... 00:17:10.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.921 12:56:28 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.921 12:56:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:12.819 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:12.819 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.819 
12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:12.819 Found net devices under 0000:84:00.0: cvl_0_0 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:12.819 Found net devices under 0000:84:00.1: cvl_0_1 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.819 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.820 12:56:30 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.820 12:56:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:17:13.077 00:17:13.077 --- 10.0.0.2 ping statistics --- 00:17:13.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.077 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:17:13.077 00:17:13.077 --- 10.0.0.1 ping statistics --- 00:17:13.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.077 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3408149 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3408149 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3408149 ']' 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.077 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.077 [2024-07-15 12:56:31.177055] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:17:13.077 [2024-07-15 12:56:31.177142] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:13.077 [2024-07-15 12:56:31.246879] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:13.335 [2024-07-15 12:56:31.345391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.335 [2024-07-15 12:56:31.345448] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.335 [2024-07-15 12:56:31.345476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.335 [2024-07-15 12:56:31.345487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.335 [2024-07-15 12:56:31.345496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
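(A brief annotation of the launch line above, sketched for readability; flag meanings follow SPDK's standard application options.)

  # -i 0      : shared-memory ID for this app instance
  # -e 0xFFFF : enable every tracepoint group
  # --no-huge : back DPDK memory with anonymous pages instead of hugepages
  # -s 1024   : size the memory pool at 1024 MB of that anonymous memory
  # -m 0x78   : core mask 0b1111000, i.e. reactors on cores 3-6 (matching the notices below)
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78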
00:17:13.335 [2024-07-15 12:56:31.345593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:13.335 [2024-07-15 12:56:31.345652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:13.335 [2024-07-15 12:56:31.345728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:13.335 [2024-07-15 12:56:31.345731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.335 [2024-07-15 12:56:31.472684] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.335 Malloc0 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.335 [2024-07-15 12:56:31.510888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:13.335 { 00:17:13.335 "params": { 00:17:13.335 "name": "Nvme$subsystem", 00:17:13.335 "trtype": "$TEST_TRANSPORT", 00:17:13.335 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:13.335 "adrfam": "ipv4", 00:17:13.335 "trsvcid": "$NVMF_PORT", 00:17:13.335 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:13.335 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:13.335 "hdgst": ${hdgst:-false}, 00:17:13.335 "ddgst": ${ddgst:-false} 00:17:13.335 }, 00:17:13.335 "method": "bdev_nvme_attach_controller" 00:17:13.335 } 00:17:13.335 EOF 00:17:13.335 )") 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:13.335 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:13.336 12:56:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:13.336 "params": { 00:17:13.336 "name": "Nvme1", 00:17:13.336 "trtype": "tcp", 00:17:13.336 "traddr": "10.0.0.2", 00:17:13.336 "adrfam": "ipv4", 00:17:13.336 "trsvcid": "4420", 00:17:13.336 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:13.336 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:13.336 "hdgst": false, 00:17:13.336 "ddgst": false 00:17:13.336 }, 00:17:13.336 "method": "bdev_nvme_attach_controller" 00:17:13.336 }' 00:17:13.594 [2024-07-15 12:56:31.558935] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:13.594 [2024-07-15 12:56:31.559011] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3408175 ] 00:17:13.594 [2024-07-15 12:56:31.622648] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:13.594 [2024-07-15 12:56:31.737919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.594 [2024-07-15 12:56:31.737975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.594 [2024-07-15 12:56:31.737978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.853 I/O targets: 00:17:13.853 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:13.853 00:17:13.853 00:17:13.853 CUnit - A unit testing framework for C - Version 2.1-3 00:17:13.853 http://cunit.sourceforge.net/ 00:17:13.853 00:17:13.853 00:17:13.853 Suite: bdevio tests on: Nvme1n1 00:17:13.853 Test: blockdev write read block ...passed 00:17:13.853 Test: blockdev write zeroes read block ...passed 00:17:13.853 Test: blockdev write zeroes read no split ...passed 00:17:14.110 Test: blockdev write zeroes read split ...passed 00:17:14.110 Test: blockdev write zeroes read split partial ...passed 00:17:14.110 Test: blockdev reset ...[2024-07-15 12:56:32.142285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:14.110 [2024-07-15 12:56:32.142407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5d670 (9): Bad file descriptor 00:17:14.110 [2024-07-15 12:56:32.245412] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:14.110 passed 00:17:14.110 Test: blockdev write read 8 blocks ...passed 00:17:14.110 Test: blockdev write read size > 128k ...passed 00:17:14.110 Test: blockdev write read invalid size ...passed 00:17:14.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:14.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:14.110 Test: blockdev write read max offset ...passed 00:17:14.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:14.369 Test: blockdev writev readv 8 blocks ...passed 00:17:14.369 Test: blockdev writev readv 30 x 1block ...passed 00:17:14.369 Test: blockdev writev readv block ...passed 00:17:14.369 Test: blockdev writev readv size > 128k ...passed 00:17:14.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:14.369 Test: blockdev comparev and writev ...[2024-07-15 12:56:32.417468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.417514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.417538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.417558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.417980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.418004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.418027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.418051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.418390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.418413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.418435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.418452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.418863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.418886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.418907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:14.369 [2024-07-15 12:56:32.418923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:14.369 passed 00:17:14.369 Test: blockdev nvme passthru rw ...passed 00:17:14.369 Test: blockdev nvme passthru vendor specific ...[2024-07-15 12:56:32.501069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:14.369 [2024-07-15 12:56:32.501096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:14.369 [2024-07-15 12:56:32.501245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:14.370 [2024-07-15 12:56:32.501267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:14.370 [2024-07-15 12:56:32.501418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:14.370 [2024-07-15 12:56:32.501441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:14.370 [2024-07-15 12:56:32.501588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:14.370 [2024-07-15 12:56:32.501610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:14.370 passed 00:17:14.370 Test: blockdev nvme admin passthru ...passed 00:17:14.370 Test: blockdev copy ...passed 00:17:14.370 00:17:14.370 Run Summary: Type Total Ran Passed Failed Inactive 00:17:14.370 suites 1 1 n/a 0 0 00:17:14.370 tests 23 23 23 0 0 00:17:14.370 asserts 152 152 152 0 n/a 00:17:14.370 00:17:14.370 Elapsed time = 1.234 seconds 00:17:14.935 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:14.935 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.935 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.935 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:14.936 rmmod nvme_tcp 00:17:14.936 rmmod nvme_fabrics 00:17:14.936 rmmod nvme_keyring 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3408149 ']' 00:17:14.936 12:56:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3408149 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3408149 ']' 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3408149 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.936 12:56:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3408149 00:17:14.936 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:14.936 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:14.936 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3408149' 00:17:14.936 killing process with pid 3408149 00:17:14.936 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3408149 00:17:14.936 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3408149 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.509 12:56:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.414 12:56:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:17.414 00:17:17.414 real 0m6.584s 00:17:17.414 user 0m10.920s 00:17:17.414 sys 0m2.499s 00:17:17.414 12:56:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:17.414 12:56:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:17.415 ************************************ 00:17:17.415 END TEST nvmf_bdevio_no_huge 00:17:17.415 ************************************ 00:17:17.415 12:56:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:17.415 12:56:35 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:17.415 12:56:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:17.415 12:56:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.415 12:56:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:17.415 ************************************ 00:17:17.415 START TEST nvmf_tls 00:17:17.415 ************************************ 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:17.415 * Looking for test storage... 
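For reference, the nvmf_tls target test that begins here exercises NVMe/TCP with TLS pre-shared keys. Reconstructed from the rpc.py calls traced further down (with the full rpc.py path abbreviated), the target-side setup amounts to roughly the following sketch, not the verbatim tls.sh:

  # write one of the PSK interchange strings generated below to a key file (mode 0600),
  # then bring up a TLS-enabled TCP listener and allow host1 to use that PSK
  KEY=/tmp/tmp.Eq79br0DLs
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

Initiators (spdk_nvme_perf with --psk-path, and bdevperf via bdev_nvme_attach_controller --psk) then connect with the same interchange-format key; the later cases in this run verify that a mismatched key or an unregistered host NQN is rejected.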
00:17:17.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:17.415 12:56:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:19.947 
12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:17:19.947 Found 0000:84:00.0 (0x8086 - 0x159b) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:17:19.947 Found 0000:84:00.1 (0x8086 - 0x159b) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.947 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:17:19.948 Found net devices under 0000:84:00.0: cvl_0_0 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:17:19.948 Found net devices under 0000:84:00.1: cvl_0_1 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:19.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:19.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:17:19.948 00:17:19.948 --- 10.0.0.2 ping statistics --- 00:17:19.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.948 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:19.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:19.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:17:19.948 00:17:19.948 --- 10.0.0.1 ping statistics --- 00:17:19.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.948 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3410386 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3410386 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3410386 ']' 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:19.948 12:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 [2024-07-15 12:56:37.848415] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:17:19.948 [2024-07-15 12:56:37.848497] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:19.948 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.948 [2024-07-15 12:56:37.912036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.948 [2024-07-15 12:56:38.011576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:19.948 [2024-07-15 12:56:38.011634] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:19.948 [2024-07-15 12:56:38.011661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.948 [2024-07-15 12:56:38.011673] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.948 [2024-07-15 12:56:38.011682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.948 [2024-07-15 12:56:38.011716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:19.948 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:20.206 true 00:17:20.206 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:20.206 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:20.464 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:20.464 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:20.464 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:20.722 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:20.722 12:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:20.981 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:20.981 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:20.981 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:21.251 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:21.251 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:21.509 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:21.509 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:21.509 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:21.509 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:21.767 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:21.767 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:21.767 12:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:22.025 12:56:40 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:22.025 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:22.283 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:22.283 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:22.283 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:22.541 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:22.541 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Eq79br0DLs 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.xaH23tfEhu 00:17:22.798 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:22.799 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:22.799 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Eq79br0DLs 00:17:22.799 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xaH23tfEhu 00:17:22.799 12:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:17:23.056 12:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:23.622 12:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Eq79br0DLs 00:17:23.622 12:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Eq79br0DLs 00:17:23.622 12:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.622 [2024-07-15 12:56:41.776090] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.622 12:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:24.188 12:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.188 [2024-07-15 12:56:42.341633] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.188 [2024-07-15 12:56:42.341886] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.188 12:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.446 malloc0 00:17:24.446 12:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.704 12:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Eq79br0DLs 00:17:24.962 [2024-07-15 12:56:43.087166] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:24.962 12:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Eq79br0DLs 00:17:24.962 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.185 Initializing NVMe Controllers 00:17:37.185 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.185 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:37.185 Initialization complete. Launching workers. 
00:17:37.185 ======================================================== 00:17:37.185 Latency(us) 00:17:37.185 Device Information : IOPS MiB/s Average min max 00:17:37.185 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8714.94 34.04 7345.74 1054.45 11426.26 00:17:37.185 ======================================================== 00:17:37.185 Total : 8714.94 34.04 7345.74 1054.45 11426.26 00:17:37.185 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eq79br0DLs 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Eq79br0DLs' 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3412161 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3412161 /var/tmp/bdevperf.sock 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3412161 ']' 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.185 [2024-07-15 12:56:53.241206] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:37.185 [2024-07-15 12:56:53.241283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412161 ] 00:17:37.185 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.185 [2024-07-15 12:56:53.299220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.185 [2024-07-15 12:56:53.408693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Eq79br0DLs 00:17:37.185 [2024-07-15 12:56:53.798298] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.185 [2024-07-15 12:56:53.798426] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.185 TLSTESTn1 00:17:37.185 12:56:53 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:37.185 Running I/O for 10 seconds... 00:17:47.149 00:17:47.149 Latency(us) 00:17:47.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.149 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.149 Verification LBA range: start 0x0 length 0x2000 00:17:47.149 TLSTESTn1 : 10.02 3623.68 14.16 0.00 0.00 35262.86 5485.61 46020.84 00:17:47.149 =================================================================================================================== 00:17:47.149 Total : 3623.68 14.16 0.00 0.00 35262.86 5485.61 46020.84 00:17:47.149 0 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3412161 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3412161 ']' 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3412161 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3412161 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.149 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3412161' 00:17:47.149 killing process with pid 3412161 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3412161 00:17:47.150 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.150 00:17:47.150 Latency(us) 00:17:47.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:17:47.150 =================================================================================================================== 00:17:47.150 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.150 [2024-07-15 12:57:04.079729] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3412161 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xaH23tfEhu 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xaH23tfEhu 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xaH23tfEhu 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xaH23tfEhu' 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3413584 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3413584 /var/tmp/bdevperf.sock 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3413584 ']' 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.150 [2024-07-15 12:57:04.391403] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:47.150 [2024-07-15 12:57:04.391480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413584 ] 00:17:47.150 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.150 [2024-07-15 12:57:04.451756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.150 [2024-07-15 12:57:04.563229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xaH23tfEhu 00:17:47.150 [2024-07-15 12:57:04.949470] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.150 [2024-07-15 12:57:04.949619] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.150 [2024-07-15 12:57:04.954938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.150 [2024-07-15 12:57:04.955460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24286d0 (107): Transport endpoint is not connected 00:17:47.150 [2024-07-15 12:57:04.956447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24286d0 (9): Bad file descriptor 00:17:47.150 [2024-07-15 12:57:04.957445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:47.150 [2024-07-15 12:57:04.957469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.150 [2024-07-15 12:57:04.957506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
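This failure is the expected outcome of the preceding case: /tmp/tmp.xaH23tfEhu holds the second interchange key, which is not the PSK registered for host1 on the target, so the TLS handshake cannot complete and the controller attach errors out. The initiator-side call being exercised is, in sketch form (rpc.py path abbreviated):

  # expected to fail: this PSK does not match the one added for host1 via nvmf_subsystem_add_host --psk
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xaH23tfEhu
  # expected: JSON-RPC error -5 (Input/output error)

The request and error response that rpc.py prints for this attempt follow.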
00:17:47.150 request: 00:17:47.150 { 00:17:47.150 "name": "TLSTEST", 00:17:47.150 "trtype": "tcp", 00:17:47.150 "traddr": "10.0.0.2", 00:17:47.150 "adrfam": "ipv4", 00:17:47.150 "trsvcid": "4420", 00:17:47.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.150 "prchk_reftag": false, 00:17:47.150 "prchk_guard": false, 00:17:47.150 "hdgst": false, 00:17:47.150 "ddgst": false, 00:17:47.150 "psk": "/tmp/tmp.xaH23tfEhu", 00:17:47.150 "method": "bdev_nvme_attach_controller", 00:17:47.150 "req_id": 1 00:17:47.150 } 00:17:47.150 Got JSON-RPC error response 00:17:47.150 response: 00:17:47.150 { 00:17:47.150 "code": -5, 00:17:47.150 "message": "Input/output error" 00:17:47.150 } 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3413584 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3413584 ']' 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3413584 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.150 12:57:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413584 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413584' 00:17:47.150 killing process with pid 3413584 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3413584 00:17:47.150 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.150 00:17:47.150 Latency(us) 00:17:47.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.150 =================================================================================================================== 00:17:47.150 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.150 [2024-07-15 12:57:05.007266] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3413584 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Eq79br0DLs 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Eq79br0DLs 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Eq79br0DLs 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Eq79br0DLs' 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3413723 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3413723 /var/tmp/bdevperf.sock 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3413723 ']' 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.150 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.150 [2024-07-15 12:57:05.305154] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:47.150 [2024-07-15 12:57:05.305232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413723 ] 00:17:47.150 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.409 [2024-07-15 12:57:05.365759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.409 [2024-07-15 12:57:05.474785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.409 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.409 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.409 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Eq79br0DLs 00:17:47.668 [2024-07-15 12:57:05.828733] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.668 [2024-07-15 12:57:05.828869] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.668 [2024-07-15 12:57:05.835731] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:47.668 [2024-07-15 12:57:05.835774] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:47.668 [2024-07-15 12:57:05.835818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.668 [2024-07-15 12:57:05.836603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25256d0 (107): Transport endpoint is not connected 00:17:47.668 [2024-07-15 12:57:05.837592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25256d0 (9): Bad file descriptor 00:17:47.668 [2024-07-15 12:57:05.838593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:47.668 [2024-07-15 12:57:05.838622] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.668 [2024-07-15 12:57:05.838646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
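Here the valid key is presented but under hostnqn host2, which was never added to cnode1, so the target cannot find a PSK for the identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1" (the tcp_sock_get_key and posix_sock_psk_find_session_server_cb errors above) and the attach again fails. In sketch form (rpc.py path abbreviated):

  # expected to fail: host2 has no nvmf_subsystem_add_host entry on cnode1, so no PSK
  # can be looked up for its TLS identity even though the key file itself is valid
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Eq79br0DLs
  # expected: JSON-RPC error -5 (Input/output error)

The corresponding request and error response follow.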
00:17:47.668 request: 00:17:47.668 { 00:17:47.668 "name": "TLSTEST", 00:17:47.668 "trtype": "tcp", 00:17:47.668 "traddr": "10.0.0.2", 00:17:47.668 "adrfam": "ipv4", 00:17:47.668 "trsvcid": "4420", 00:17:47.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.668 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:47.668 "prchk_reftag": false, 00:17:47.668 "prchk_guard": false, 00:17:47.668 "hdgst": false, 00:17:47.668 "ddgst": false, 00:17:47.668 "psk": "/tmp/tmp.Eq79br0DLs", 00:17:47.668 "method": "bdev_nvme_attach_controller", 00:17:47.668 "req_id": 1 00:17:47.668 } 00:17:47.668 Got JSON-RPC error response 00:17:47.668 response: 00:17:47.668 { 00:17:47.668 "code": -5, 00:17:47.668 "message": "Input/output error" 00:17:47.668 } 00:17:47.668 12:57:05 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3413723 00:17:47.668 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3413723 ']' 00:17:47.668 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3413723 00:17:47.668 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.668 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.668 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413723 00:17:47.926 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.926 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.926 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413723' 00:17:47.926 killing process with pid 3413723 00:17:47.926 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3413723 00:17:47.926 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.926 00:17:47.926 Latency(us) 00:17:47.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.926 =================================================================================================================== 00:17:47.926 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.926 [2024-07-15 12:57:05.890046] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.926 12:57:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3413723 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eq79br0DLs 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eq79br0DLs 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Eq79br0DLs 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Eq79br0DLs' 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3413819 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3413819 /var/tmp/bdevperf.sock 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3413819 ']' 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.185 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.185 [2024-07-15 12:57:06.191085] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:48.185 [2024-07-15 12:57:06.191165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413819 ] 00:17:48.185 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.185 [2024-07-15 12:57:06.250237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.185 [2024-07-15 12:57:06.354571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.443 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.443 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:48.443 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Eq79br0DLs 00:17:48.700 [2024-07-15 12:57:06.737235] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.700 [2024-07-15 12:57:06.737365] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:48.700 [2024-07-15 12:57:06.748033] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:48.700 [2024-07-15 12:57:06.748079] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:48.700 [2024-07-15 12:57:06.748119] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:48.700 [2024-07-15 12:57:06.748219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21016d0 (107): Transport endpoint is not connected 00:17:48.700 [2024-07-15 12:57:06.749166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21016d0 (9): Bad file descriptor 00:17:48.700 [2024-07-15 12:57:06.750165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:48.700 [2024-07-15 12:57:06.750188] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:48.700 [2024-07-15 12:57:06.750224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
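One detail worth noting from the errors above: the target looks the PSK up by the identity string printed in the message, the space-separated combination of a fixed prefix, the host NQN and the subsystem NQN. A throwaway illustration follows; the "NVMe0R01" prefix is copied from the log, and what its digit and hash-indicator fields encode is an assumption left aside here.

# Hypothetical helper mirroring the identity string seen in the errors above.
psk_identity() { printf 'NVMe0R01 %s %s\n' "$1" "$2"; }
psk_identity nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
# -> "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2", an identity
#    with no registered PSK on the target, hence "Could not find PSK for identity"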
00:17:48.700 request: 00:17:48.700 { 00:17:48.700 "name": "TLSTEST", 00:17:48.700 "trtype": "tcp", 00:17:48.700 "traddr": "10.0.0.2", 00:17:48.700 "adrfam": "ipv4", 00:17:48.700 "trsvcid": "4420", 00:17:48.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:48.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:48.700 "prchk_reftag": false, 00:17:48.700 "prchk_guard": false, 00:17:48.700 "hdgst": false, 00:17:48.700 "ddgst": false, 00:17:48.700 "psk": "/tmp/tmp.Eq79br0DLs", 00:17:48.700 "method": "bdev_nvme_attach_controller", 00:17:48.700 "req_id": 1 00:17:48.700 } 00:17:48.700 Got JSON-RPC error response 00:17:48.700 response: 00:17:48.700 { 00:17:48.700 "code": -5, 00:17:48.700 "message": "Input/output error" 00:17:48.700 } 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3413819 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3413819 ']' 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3413819 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413819 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413819' 00:17:48.700 killing process with pid 3413819 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3413819 00:17:48.700 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.700 00:17:48.700 Latency(us) 00:17:48.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.700 =================================================================================================================== 00:17:48.700 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.700 [2024-07-15 12:57:06.794431] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:48.700 12:57:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3413819 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3413884 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3413884 /var/tmp/bdevperf.sock 00:17:48.957 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3413884 ']' 00:17:48.958 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.958 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.958 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.958 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.958 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.958 [2024-07-15 12:57:07.072627] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:48.958 [2024-07-15 12:57:07.072710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3413884 ] 00:17:48.958 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.958 [2024-07-15 12:57:07.132171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.215 [2024-07-15 12:57:07.244551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.215 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.215 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.215 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:49.473 [2024-07-15 12:57:07.600319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:49.473 [2024-07-15 12:57:07.601835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd7e10 (9): Bad file descriptor 00:17:49.473 [2024-07-15 12:57:07.602832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:49.473 [2024-07-15 12:57:07.602856] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:49.473 [2024-07-15 12:57:07.602883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
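The third negative case above repeats the attach with no --psk argument at all; against a listener that was created with TLS enabled (the -k flag used when the target was set up earlier in the test), the connection cannot be established and the RPC fails the same way. The call, as logged:

# Same attach as before, but with no PSK supplied; expected to fail against a
# TLS-only listener with the error -5 response that follows.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1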
00:17:49.473 request: 00:17:49.473 { 00:17:49.473 "name": "TLSTEST", 00:17:49.473 "trtype": "tcp", 00:17:49.473 "traddr": "10.0.0.2", 00:17:49.473 "adrfam": "ipv4", 00:17:49.473 "trsvcid": "4420", 00:17:49.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.473 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.473 "prchk_reftag": false, 00:17:49.473 "prchk_guard": false, 00:17:49.473 "hdgst": false, 00:17:49.473 "ddgst": false, 00:17:49.473 "method": "bdev_nvme_attach_controller", 00:17:49.473 "req_id": 1 00:17:49.473 } 00:17:49.473 Got JSON-RPC error response 00:17:49.473 response: 00:17:49.473 { 00:17:49.473 "code": -5, 00:17:49.473 "message": "Input/output error" 00:17:49.473 } 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3413884 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3413884 ']' 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3413884 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3413884 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3413884' 00:17:49.473 killing process with pid 3413884 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3413884 00:17:49.473 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.473 00:17:49.473 Latency(us) 00:17:49.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.473 =================================================================================================================== 00:17:49.473 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.473 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3413884 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 3410386 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3410386 ']' 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3410386 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3410386 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3410386' 00:17:49.730 
killing process with pid 3410386 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3410386 00:17:49.730 [2024-07-15 12:57:07.928919] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:49.730 12:57:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3410386 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.wJH5AmeQHC 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.wJH5AmeQHC 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3414392 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3414392 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3414392 ']' 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.295 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.295 [2024-07-15 12:57:08.301036] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:50.295 [2024-07-15 12:57:08.301153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.295 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.295 [2024-07-15 12:57:08.367416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.295 [2024-07-15 12:57:08.473733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.295 [2024-07-15 12:57:08.473797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.295 [2024-07-15 12:57:08.473811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.295 [2024-07-15 12:57:08.473823] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.295 [2024-07-15 12:57:08.473833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:50.295 [2024-07-15 12:57:08.473859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.wJH5AmeQHC 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wJH5AmeQHC 00:17:50.553 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:50.810 [2024-07-15 12:57:08.830264] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.810 12:57:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.067 12:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.325 [2024-07-15 12:57:09.331565] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.325 [2024-07-15 12:57:09.331816] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.325 12:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:51.583 malloc0 00:17:51.583 12:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:51.841 12:57:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.wJH5AmeQHC 00:17:52.098 [2024-07-15 12:57:10.083867] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJH5AmeQHC 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wJH5AmeQHC' 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3414819 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3414819 /var/tmp/bdevperf.sock 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3414819 ']' 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.098 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.098 [2024-07-15 12:57:10.150349] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:17:52.098 [2024-07-15 12:57:10.150419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414819 ] 00:17:52.098 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.098 [2024-07-15 12:57:10.209144] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.355 [2024-07-15 12:57:10.320119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.355 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.355 12:57:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.355 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC 00:17:52.612 [2024-07-15 12:57:10.676328] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.612 [2024-07-15 12:57:10.676464] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.612 TLSTESTn1 00:17:52.612 12:57:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:52.869 Running I/O for 10 seconds... 00:18:02.860 00:18:02.860 Latency(us) 00:18:02.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.860 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:02.860 Verification LBA range: start 0x0 length 0x2000 00:18:02.860 TLSTESTn1 : 10.02 3690.30 14.42 0.00 0.00 34626.31 5388.52 39807.05 00:18:02.860 =================================================================================================================== 00:18:02.860 Total : 3690.30 14.42 0.00 0.00 34626.31 5388.52 39807.05 00:18:02.860 0 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 3414819 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3414819 ']' 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3414819 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3414819 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3414819' 00:18:02.860 killing process with pid 3414819 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3414819 00:18:02.860 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.860 00:18:02.860 Latency(us) 00:18:02.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:18:02.860 =================================================================================================================== 00:18:02.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.860 [2024-07-15 12:57:20.959521] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:02.860 12:57:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3414819 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.wJH5AmeQHC 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJH5AmeQHC 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJH5AmeQHC 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wJH5AmeQHC 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wJH5AmeQHC' 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3416136 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3416136 /var/tmp/bdevperf.sock 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3416136 ']' 00:18:03.118 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.119 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.119 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.119 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.119 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.119 [2024-07-15 12:57:21.280143] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:18:03.119 [2024-07-15 12:57:21.280220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416136 ] 00:18:03.119 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.377 [2024-07-15 12:57:21.340943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.377 [2024-07-15 12:57:21.443683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.377 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.377 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:03.377 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC 00:18:03.634 [2024-07-15 12:57:21.809704] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.635 [2024-07-15 12:57:21.809804] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:03.635 [2024-07-15 12:57:21.809827] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.wJH5AmeQHC 00:18:03.635 request: 00:18:03.635 { 00:18:03.635 "name": "TLSTEST", 00:18:03.635 "trtype": "tcp", 00:18:03.635 "traddr": "10.0.0.2", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "4420", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.635 "prchk_reftag": false, 00:18:03.635 "prchk_guard": false, 00:18:03.635 "hdgst": false, 00:18:03.635 "ddgst": false, 00:18:03.635 "psk": "/tmp/tmp.wJH5AmeQHC", 00:18:03.635 "method": "bdev_nvme_attach_controller", 00:18:03.635 "req_id": 1 00:18:03.635 } 00:18:03.635 Got JSON-RPC error response 00:18:03.635 response: 00:18:03.635 { 00:18:03.635 "code": -1, 00:18:03.635 "message": "Operation not permitted" 00:18:03.635 } 00:18:03.635 12:57:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 3416136 00:18:03.635 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3416136 ']' 00:18:03.635 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3416136 00:18:03.635 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.635 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.635 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416136 00:18:03.893 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:03.893 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:03.893 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416136' 00:18:03.893 killing process with pid 3416136 00:18:03.893 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3416136 00:18:03.893 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.893 00:18:03.893 Latency(us) 00:18:03.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.893 
=================================================================================================================== 00:18:03.893 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.893 12:57:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3416136 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 3414392 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3414392 ']' 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3414392 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3414392 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3414392' 00:18:04.151 killing process with pid 3414392 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3414392 00:18:04.151 [2024-07-15 12:57:22.146146] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:04.151 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3414392 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3416281 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3416281 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3416281 ']' 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.409 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
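The block above and the steps that follow exercise the PSK file permission check: after chmod 0666 the initiator-side bdev_nvme_attach_controller was rejected with "Incorrect permissions for PSK file" (error -1, Operation not permitted), and the freshly started target below refuses the same file in nvmf_subsystem_add_host before the mode is set back to 0600 at target/tls.sh@181. In short, the key file has to stay private to its owner:

chmod 0600 /tmp/tmp.wJH5AmeQHC   # accepted: the TLS attach and the 10 s verify run above succeed
chmod 0666 /tmp/tmp.wJH5AmeQHC   # rejected on both ends: "Incorrect permissions for PSK file"
# (presumably any group/other access bits trigger the rejection; 0600 is the only
#  mode demonstrated as accepted in this run)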
00:18:04.410 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.410 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.410 [2024-07-15 12:57:22.486368] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:04.410 [2024-07-15 12:57:22.486455] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.410 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.410 [2024-07-15 12:57:22.549130] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.668 [2024-07-15 12:57:22.649477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.668 [2024-07-15 12:57:22.649532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.668 [2024-07-15 12:57:22.649562] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.668 [2024-07-15 12:57:22.649573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.668 [2024-07-15 12:57:22.649583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.668 [2024-07-15 12:57:22.649608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.wJH5AmeQHC 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wJH5AmeQHC 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.wJH5AmeQHC 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wJH5AmeQHC 00:18:04.668 12:57:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.925 [2024-07-15 12:57:23.067413] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.925 12:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.182 
12:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.439 [2024-07-15 12:57:23.588796] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.439 [2024-07-15 12:57:23.589032] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.439 12:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.697 malloc0 00:18:05.697 12:57:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.955 12:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC 00:18:06.214 [2024-07-15 12:57:24.333309] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:06.214 [2024-07-15 12:57:24.333348] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:06.214 [2024-07-15 12:57:24.333394] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:06.214 request: 00:18:06.214 { 00:18:06.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.214 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.214 "psk": "/tmp/tmp.wJH5AmeQHC", 00:18:06.214 "method": "nvmf_subsystem_add_host", 00:18:06.214 "req_id": 1 00:18:06.214 } 00:18:06.214 Got JSON-RPC error response 00:18:06.214 response: 00:18:06.214 { 00:18:06.214 "code": -32603, 00:18:06.214 "message": "Internal error" 00:18:06.214 } 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 3416281 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3416281 ']' 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3416281 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416281 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416281' 00:18:06.214 killing process with pid 3416281 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3416281 00:18:06.214 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3416281 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.wJH5AmeQHC 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:06.473 
12:57:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3416574 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3416574 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3416574 ']' 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.473 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.733 [2024-07-15 12:57:24.711800] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:06.733 [2024-07-15 12:57:24.711886] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.733 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.733 [2024-07-15 12:57:24.773195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.733 [2024-07-15 12:57:24.870746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.733 [2024-07-15 12:57:24.870806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.733 [2024-07-15 12:57:24.870836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.733 [2024-07-15 12:57:24.870848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.733 [2024-07-15 12:57:24.870858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
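With the key back at mode 0600, the setup_nvmf_tgt helper that runs next (target/tls.sh@185) builds the TLS-enabled target out of the same six RPCs that appear in the trace; collected in one place, with the rpc.py path and NQNs exactly as logged, the sequence is:

# TCP transport, a subsystem with one malloc namespace, a TLS listener (-k) and
# one allowed host bound to the PSK file. /tmp/tmp.wJH5AmeQHC holds the
# interchange-format key (NVMeTLSkey-1:02:...) generated at target/tls.sh@159.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC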
00:18:06.733 [2024-07-15 12:57:24.870893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.991 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.991 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:06.991 12:57:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.991 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:06.991 12:57:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.991 12:57:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.991 12:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.wJH5AmeQHC 00:18:06.991 12:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wJH5AmeQHC 00:18:06.991 12:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.249 [2024-07-15 12:57:25.237841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.249 12:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.510 12:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:07.767 [2024-07-15 12:57:25.739178] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:07.767 [2024-07-15 12:57:25.739408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.767 12:57:25 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.024 malloc0 00:18:08.024 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:08.282 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC 00:18:08.540 [2024-07-15 12:57:26.623688] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3416858 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3416858 /var/tmp/bdevperf.sock 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3416858 ']' 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.540 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.540 [2024-07-15 12:57:26.688624] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:08.540 [2024-07-15 12:57:26.688694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3416858 ] 00:18:08.540 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.797 [2024-07-15 12:57:26.748084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.797 [2024-07-15 12:57:26.853862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.797 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:08.797 12:57:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:08.797 12:57:26 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC 00:18:09.053 [2024-07-15 12:57:27.174675] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.053 [2024-07-15 12:57:27.174853] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.053 TLSTESTn1 00:18:09.310 12:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:09.566 12:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:09.566 "subsystems": [ 00:18:09.566 { 00:18:09.566 "subsystem": "keyring", 00:18:09.566 "config": [] 00:18:09.566 }, 00:18:09.566 { 00:18:09.566 "subsystem": "iobuf", 00:18:09.566 "config": [ 00:18:09.566 { 00:18:09.566 "method": "iobuf_set_options", 00:18:09.566 "params": { 00:18:09.567 "small_pool_count": 8192, 00:18:09.567 "large_pool_count": 1024, 00:18:09.567 "small_bufsize": 8192, 00:18:09.567 "large_bufsize": 135168 00:18:09.567 } 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "sock", 00:18:09.567 "config": [ 00:18:09.567 { 00:18:09.567 "method": "sock_set_default_impl", 00:18:09.567 "params": { 00:18:09.567 "impl_name": "posix" 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "sock_impl_set_options", 00:18:09.567 "params": { 00:18:09.567 "impl_name": "ssl", 00:18:09.567 "recv_buf_size": 4096, 00:18:09.567 "send_buf_size": 4096, 00:18:09.567 "enable_recv_pipe": true, 00:18:09.567 "enable_quickack": false, 00:18:09.567 "enable_placement_id": 0, 00:18:09.567 "enable_zerocopy_send_server": true, 00:18:09.567 "enable_zerocopy_send_client": false, 00:18:09.567 "zerocopy_threshold": 0, 00:18:09.567 "tls_version": 0, 00:18:09.567 "enable_ktls": false 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "sock_impl_set_options", 00:18:09.567 "params": { 00:18:09.567 "impl_name": "posix", 00:18:09.567 "recv_buf_size": 2097152, 00:18:09.567 
"send_buf_size": 2097152, 00:18:09.567 "enable_recv_pipe": true, 00:18:09.567 "enable_quickack": false, 00:18:09.567 "enable_placement_id": 0, 00:18:09.567 "enable_zerocopy_send_server": true, 00:18:09.567 "enable_zerocopy_send_client": false, 00:18:09.567 "zerocopy_threshold": 0, 00:18:09.567 "tls_version": 0, 00:18:09.567 "enable_ktls": false 00:18:09.567 } 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "vmd", 00:18:09.567 "config": [] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "accel", 00:18:09.567 "config": [ 00:18:09.567 { 00:18:09.567 "method": "accel_set_options", 00:18:09.567 "params": { 00:18:09.567 "small_cache_size": 128, 00:18:09.567 "large_cache_size": 16, 00:18:09.567 "task_count": 2048, 00:18:09.567 "sequence_count": 2048, 00:18:09.567 "buf_count": 2048 00:18:09.567 } 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "bdev", 00:18:09.567 "config": [ 00:18:09.567 { 00:18:09.567 "method": "bdev_set_options", 00:18:09.567 "params": { 00:18:09.567 "bdev_io_pool_size": 65535, 00:18:09.567 "bdev_io_cache_size": 256, 00:18:09.567 "bdev_auto_examine": true, 00:18:09.567 "iobuf_small_cache_size": 128, 00:18:09.567 "iobuf_large_cache_size": 16 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "bdev_raid_set_options", 00:18:09.567 "params": { 00:18:09.567 "process_window_size_kb": 1024 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "bdev_iscsi_set_options", 00:18:09.567 "params": { 00:18:09.567 "timeout_sec": 30 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "bdev_nvme_set_options", 00:18:09.567 "params": { 00:18:09.567 "action_on_timeout": "none", 00:18:09.567 "timeout_us": 0, 00:18:09.567 "timeout_admin_us": 0, 00:18:09.567 "keep_alive_timeout_ms": 10000, 00:18:09.567 "arbitration_burst": 0, 00:18:09.567 "low_priority_weight": 0, 00:18:09.567 "medium_priority_weight": 0, 00:18:09.567 "high_priority_weight": 0, 00:18:09.567 "nvme_adminq_poll_period_us": 10000, 00:18:09.567 "nvme_ioq_poll_period_us": 0, 00:18:09.567 "io_queue_requests": 0, 00:18:09.567 "delay_cmd_submit": true, 00:18:09.567 "transport_retry_count": 4, 00:18:09.567 "bdev_retry_count": 3, 00:18:09.567 "transport_ack_timeout": 0, 00:18:09.567 "ctrlr_loss_timeout_sec": 0, 00:18:09.567 "reconnect_delay_sec": 0, 00:18:09.567 "fast_io_fail_timeout_sec": 0, 00:18:09.567 "disable_auto_failback": false, 00:18:09.567 "generate_uuids": false, 00:18:09.567 "transport_tos": 0, 00:18:09.567 "nvme_error_stat": false, 00:18:09.567 "rdma_srq_size": 0, 00:18:09.567 "io_path_stat": false, 00:18:09.567 "allow_accel_sequence": false, 00:18:09.567 "rdma_max_cq_size": 0, 00:18:09.567 "rdma_cm_event_timeout_ms": 0, 00:18:09.567 "dhchap_digests": [ 00:18:09.567 "sha256", 00:18:09.567 "sha384", 00:18:09.567 "sha512" 00:18:09.567 ], 00:18:09.567 "dhchap_dhgroups": [ 00:18:09.567 "null", 00:18:09.567 "ffdhe2048", 00:18:09.567 "ffdhe3072", 00:18:09.567 "ffdhe4096", 00:18:09.567 "ffdhe6144", 00:18:09.567 "ffdhe8192" 00:18:09.567 ] 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "bdev_nvme_set_hotplug", 00:18:09.567 "params": { 00:18:09.567 "period_us": 100000, 00:18:09.567 "enable": false 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "bdev_malloc_create", 00:18:09.567 "params": { 00:18:09.567 "name": "malloc0", 00:18:09.567 "num_blocks": 8192, 00:18:09.567 "block_size": 4096, 00:18:09.567 "physical_block_size": 4096, 00:18:09.567 "uuid": 
"0014b6be-0aa9-4be8-b23e-7cb5ba7be897", 00:18:09.567 "optimal_io_boundary": 0 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "bdev_wait_for_examine" 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "nbd", 00:18:09.567 "config": [] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "scheduler", 00:18:09.567 "config": [ 00:18:09.567 { 00:18:09.567 "method": "framework_set_scheduler", 00:18:09.567 "params": { 00:18:09.567 "name": "static" 00:18:09.567 } 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "subsystem": "nvmf", 00:18:09.567 "config": [ 00:18:09.567 { 00:18:09.567 "method": "nvmf_set_config", 00:18:09.567 "params": { 00:18:09.567 "discovery_filter": "match_any", 00:18:09.567 "admin_cmd_passthru": { 00:18:09.567 "identify_ctrlr": false 00:18:09.567 } 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_set_max_subsystems", 00:18:09.567 "params": { 00:18:09.567 "max_subsystems": 1024 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_set_crdt", 00:18:09.567 "params": { 00:18:09.567 "crdt1": 0, 00:18:09.567 "crdt2": 0, 00:18:09.567 "crdt3": 0 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_create_transport", 00:18:09.567 "params": { 00:18:09.567 "trtype": "TCP", 00:18:09.567 "max_queue_depth": 128, 00:18:09.567 "max_io_qpairs_per_ctrlr": 127, 00:18:09.567 "in_capsule_data_size": 4096, 00:18:09.567 "max_io_size": 131072, 00:18:09.567 "io_unit_size": 131072, 00:18:09.567 "max_aq_depth": 128, 00:18:09.567 "num_shared_buffers": 511, 00:18:09.567 "buf_cache_size": 4294967295, 00:18:09.567 "dif_insert_or_strip": false, 00:18:09.567 "zcopy": false, 00:18:09.567 "c2h_success": false, 00:18:09.567 "sock_priority": 0, 00:18:09.567 "abort_timeout_sec": 1, 00:18:09.567 "ack_timeout": 0, 00:18:09.567 "data_wr_pool_size": 0 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_create_subsystem", 00:18:09.567 "params": { 00:18:09.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.567 "allow_any_host": false, 00:18:09.567 "serial_number": "SPDK00000000000001", 00:18:09.567 "model_number": "SPDK bdev Controller", 00:18:09.567 "max_namespaces": 10, 00:18:09.567 "min_cntlid": 1, 00:18:09.567 "max_cntlid": 65519, 00:18:09.567 "ana_reporting": false 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_subsystem_add_host", 00:18:09.567 "params": { 00:18:09.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.567 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.567 "psk": "/tmp/tmp.wJH5AmeQHC" 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_subsystem_add_ns", 00:18:09.567 "params": { 00:18:09.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.567 "namespace": { 00:18:09.567 "nsid": 1, 00:18:09.567 "bdev_name": "malloc0", 00:18:09.567 "nguid": "0014B6BE0AA94BE8B23E7CB5BA7BE897", 00:18:09.567 "uuid": "0014b6be-0aa9-4be8-b23e-7cb5ba7be897", 00:18:09.567 "no_auto_visible": false 00:18:09.567 } 00:18:09.567 } 00:18:09.567 }, 00:18:09.567 { 00:18:09.567 "method": "nvmf_subsystem_add_listener", 00:18:09.567 "params": { 00:18:09.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.567 "listen_address": { 00:18:09.567 "trtype": "TCP", 00:18:09.567 "adrfam": "IPv4", 00:18:09.567 "traddr": "10.0.0.2", 00:18:09.567 "trsvcid": "4420" 00:18:09.567 }, 00:18:09.567 "secure_channel": true 00:18:09.567 } 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 } 00:18:09.567 ] 00:18:09.567 }' 00:18:09.567 12:57:27 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:09.824 12:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:09.824 "subsystems": [ 00:18:09.824 { 00:18:09.824 "subsystem": "keyring", 00:18:09.824 "config": [] 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "subsystem": "iobuf", 00:18:09.824 "config": [ 00:18:09.824 { 00:18:09.824 "method": "iobuf_set_options", 00:18:09.824 "params": { 00:18:09.824 "small_pool_count": 8192, 00:18:09.824 "large_pool_count": 1024, 00:18:09.824 "small_bufsize": 8192, 00:18:09.824 "large_bufsize": 135168 00:18:09.824 } 00:18:09.824 } 00:18:09.824 ] 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "subsystem": "sock", 00:18:09.824 "config": [ 00:18:09.824 { 00:18:09.824 "method": "sock_set_default_impl", 00:18:09.824 "params": { 00:18:09.824 "impl_name": "posix" 00:18:09.824 } 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "method": "sock_impl_set_options", 00:18:09.824 "params": { 00:18:09.824 "impl_name": "ssl", 00:18:09.824 "recv_buf_size": 4096, 00:18:09.824 "send_buf_size": 4096, 00:18:09.824 "enable_recv_pipe": true, 00:18:09.824 "enable_quickack": false, 00:18:09.824 "enable_placement_id": 0, 00:18:09.824 "enable_zerocopy_send_server": true, 00:18:09.824 "enable_zerocopy_send_client": false, 00:18:09.824 "zerocopy_threshold": 0, 00:18:09.824 "tls_version": 0, 00:18:09.824 "enable_ktls": false 00:18:09.824 } 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "method": "sock_impl_set_options", 00:18:09.824 "params": { 00:18:09.824 "impl_name": "posix", 00:18:09.824 "recv_buf_size": 2097152, 00:18:09.824 "send_buf_size": 2097152, 00:18:09.824 "enable_recv_pipe": true, 00:18:09.824 "enable_quickack": false, 00:18:09.824 "enable_placement_id": 0, 00:18:09.824 "enable_zerocopy_send_server": true, 00:18:09.824 "enable_zerocopy_send_client": false, 00:18:09.824 "zerocopy_threshold": 0, 00:18:09.824 "tls_version": 0, 00:18:09.824 "enable_ktls": false 00:18:09.824 } 00:18:09.824 } 00:18:09.824 ] 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "subsystem": "vmd", 00:18:09.824 "config": [] 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "subsystem": "accel", 00:18:09.824 "config": [ 00:18:09.824 { 00:18:09.824 "method": "accel_set_options", 00:18:09.824 "params": { 00:18:09.824 "small_cache_size": 128, 00:18:09.824 "large_cache_size": 16, 00:18:09.824 "task_count": 2048, 00:18:09.824 "sequence_count": 2048, 00:18:09.824 "buf_count": 2048 00:18:09.824 } 00:18:09.824 } 00:18:09.824 ] 00:18:09.824 }, 00:18:09.824 { 00:18:09.824 "subsystem": "bdev", 00:18:09.824 "config": [ 00:18:09.824 { 00:18:09.824 "method": "bdev_set_options", 00:18:09.824 "params": { 00:18:09.824 "bdev_io_pool_size": 65535, 00:18:09.824 "bdev_io_cache_size": 256, 00:18:09.825 "bdev_auto_examine": true, 00:18:09.825 "iobuf_small_cache_size": 128, 00:18:09.825 "iobuf_large_cache_size": 16 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "method": "bdev_raid_set_options", 00:18:09.825 "params": { 00:18:09.825 "process_window_size_kb": 1024 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "method": "bdev_iscsi_set_options", 00:18:09.825 "params": { 00:18:09.825 "timeout_sec": 30 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "method": "bdev_nvme_set_options", 00:18:09.825 "params": { 00:18:09.825 "action_on_timeout": "none", 00:18:09.825 "timeout_us": 0, 00:18:09.825 "timeout_admin_us": 0, 00:18:09.825 "keep_alive_timeout_ms": 10000, 00:18:09.825 "arbitration_burst": 0, 
00:18:09.825 "low_priority_weight": 0, 00:18:09.825 "medium_priority_weight": 0, 00:18:09.825 "high_priority_weight": 0, 00:18:09.825 "nvme_adminq_poll_period_us": 10000, 00:18:09.825 "nvme_ioq_poll_period_us": 0, 00:18:09.825 "io_queue_requests": 512, 00:18:09.825 "delay_cmd_submit": true, 00:18:09.825 "transport_retry_count": 4, 00:18:09.825 "bdev_retry_count": 3, 00:18:09.825 "transport_ack_timeout": 0, 00:18:09.825 "ctrlr_loss_timeout_sec": 0, 00:18:09.825 "reconnect_delay_sec": 0, 00:18:09.825 "fast_io_fail_timeout_sec": 0, 00:18:09.825 "disable_auto_failback": false, 00:18:09.825 "generate_uuids": false, 00:18:09.825 "transport_tos": 0, 00:18:09.825 "nvme_error_stat": false, 00:18:09.825 "rdma_srq_size": 0, 00:18:09.825 "io_path_stat": false, 00:18:09.825 "allow_accel_sequence": false, 00:18:09.825 "rdma_max_cq_size": 0, 00:18:09.825 "rdma_cm_event_timeout_ms": 0, 00:18:09.825 "dhchap_digests": [ 00:18:09.825 "sha256", 00:18:09.825 "sha384", 00:18:09.825 "sha512" 00:18:09.825 ], 00:18:09.825 "dhchap_dhgroups": [ 00:18:09.825 "null", 00:18:09.825 "ffdhe2048", 00:18:09.825 "ffdhe3072", 00:18:09.825 "ffdhe4096", 00:18:09.825 "ffdhe6144", 00:18:09.825 "ffdhe8192" 00:18:09.825 ] 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "method": "bdev_nvme_attach_controller", 00:18:09.825 "params": { 00:18:09.825 "name": "TLSTEST", 00:18:09.825 "trtype": "TCP", 00:18:09.825 "adrfam": "IPv4", 00:18:09.825 "traddr": "10.0.0.2", 00:18:09.825 "trsvcid": "4420", 00:18:09.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.825 "prchk_reftag": false, 00:18:09.825 "prchk_guard": false, 00:18:09.825 "ctrlr_loss_timeout_sec": 0, 00:18:09.825 "reconnect_delay_sec": 0, 00:18:09.825 "fast_io_fail_timeout_sec": 0, 00:18:09.825 "psk": "/tmp/tmp.wJH5AmeQHC", 00:18:09.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.825 "hdgst": false, 00:18:09.825 "ddgst": false 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "method": "bdev_nvme_set_hotplug", 00:18:09.825 "params": { 00:18:09.825 "period_us": 100000, 00:18:09.825 "enable": false 00:18:09.825 } 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "method": "bdev_wait_for_examine" 00:18:09.825 } 00:18:09.825 ] 00:18:09.825 }, 00:18:09.825 { 00:18:09.825 "subsystem": "nbd", 00:18:09.825 "config": [] 00:18:09.825 } 00:18:09.825 ] 00:18:09.825 }' 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 3416858 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3416858 ']' 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3416858 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416858 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416858' 00:18:09.825 killing process with pid 3416858 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3416858 00:18:09.825 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.825 00:18:09.825 Latency(us) 00:18:09.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:18:09.825 =================================================================================================================== 00:18:09.825 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.825 [2024-07-15 12:57:27.914129] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.825 12:57:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3416858 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 3416574 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3416574 ']' 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3416574 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3416574 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3416574' 00:18:10.082 killing process with pid 3416574 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3416574 00:18:10.082 [2024-07-15 12:57:28.211055] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:10.082 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3416574 00:18:10.340 12:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:10.340 12:57:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.340 12:57:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:10.340 "subsystems": [ 00:18:10.340 { 00:18:10.340 "subsystem": "keyring", 00:18:10.340 "config": [] 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "subsystem": "iobuf", 00:18:10.340 "config": [ 00:18:10.340 { 00:18:10.340 "method": "iobuf_set_options", 00:18:10.340 "params": { 00:18:10.340 "small_pool_count": 8192, 00:18:10.340 "large_pool_count": 1024, 00:18:10.340 "small_bufsize": 8192, 00:18:10.340 "large_bufsize": 135168 00:18:10.340 } 00:18:10.340 } 00:18:10.340 ] 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "subsystem": "sock", 00:18:10.340 "config": [ 00:18:10.340 { 00:18:10.340 "method": "sock_set_default_impl", 00:18:10.340 "params": { 00:18:10.340 "impl_name": "posix" 00:18:10.340 } 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "method": "sock_impl_set_options", 00:18:10.340 "params": { 00:18:10.340 "impl_name": "ssl", 00:18:10.340 "recv_buf_size": 4096, 00:18:10.340 "send_buf_size": 4096, 00:18:10.340 "enable_recv_pipe": true, 00:18:10.340 "enable_quickack": false, 00:18:10.340 "enable_placement_id": 0, 00:18:10.340 "enable_zerocopy_send_server": true, 00:18:10.340 "enable_zerocopy_send_client": false, 00:18:10.340 "zerocopy_threshold": 0, 00:18:10.340 "tls_version": 0, 00:18:10.340 "enable_ktls": false 00:18:10.340 } 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "method": "sock_impl_set_options", 00:18:10.340 "params": { 00:18:10.340 "impl_name": "posix", 00:18:10.340 "recv_buf_size": 2097152, 00:18:10.340 "send_buf_size": 2097152, 00:18:10.340 "enable_recv_pipe": true, 
00:18:10.340 "enable_quickack": false, 00:18:10.340 "enable_placement_id": 0, 00:18:10.340 "enable_zerocopy_send_server": true, 00:18:10.340 "enable_zerocopy_send_client": false, 00:18:10.340 "zerocopy_threshold": 0, 00:18:10.340 "tls_version": 0, 00:18:10.340 "enable_ktls": false 00:18:10.340 } 00:18:10.340 } 00:18:10.340 ] 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "subsystem": "vmd", 00:18:10.340 "config": [] 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "subsystem": "accel", 00:18:10.340 "config": [ 00:18:10.340 { 00:18:10.340 "method": "accel_set_options", 00:18:10.340 "params": { 00:18:10.340 "small_cache_size": 128, 00:18:10.340 "large_cache_size": 16, 00:18:10.340 "task_count": 2048, 00:18:10.340 "sequence_count": 2048, 00:18:10.340 "buf_count": 2048 00:18:10.340 } 00:18:10.340 } 00:18:10.340 ] 00:18:10.340 }, 00:18:10.340 { 00:18:10.340 "subsystem": "bdev", 00:18:10.340 "config": [ 00:18:10.340 { 00:18:10.340 "method": "bdev_set_options", 00:18:10.340 "params": { 00:18:10.340 "bdev_io_pool_size": 65535, 00:18:10.340 "bdev_io_cache_size": 256, 00:18:10.341 "bdev_auto_examine": true, 00:18:10.341 "iobuf_small_cache_size": 128, 00:18:10.341 "iobuf_large_cache_size": 16 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "bdev_raid_set_options", 00:18:10.341 "params": { 00:18:10.341 "process_window_size_kb": 1024 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "bdev_iscsi_set_options", 00:18:10.341 "params": { 00:18:10.341 "timeout_sec": 30 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "bdev_nvme_set_options", 00:18:10.341 "params": { 00:18:10.341 "action_on_timeout": "none", 00:18:10.341 "timeout_us": 0, 00:18:10.341 "timeout_admin_us": 0, 00:18:10.341 "keep_alive_timeout_ms": 10000, 00:18:10.341 "arbitration_burst": 0, 00:18:10.341 "low_priority_weight": 0, 00:18:10.341 "medium_priority_weight": 0, 00:18:10.341 "high_priority_weight": 0, 00:18:10.341 "nvme_adminq_poll_period_us": 10000, 00:18:10.341 "nvme_ioq_poll_period_us": 0, 00:18:10.341 "io_queue_requests": 0, 00:18:10.341 "delay_cmd_submit": true, 00:18:10.341 "transport_retry_count": 4, 00:18:10.341 "bdev_retry_count": 3, 00:18:10.341 "transport_ack_timeout": 0, 00:18:10.341 "ctrlr_loss_timeout_sec": 0, 00:18:10.341 "reconnect_delay_sec": 0, 00:18:10.341 "fast_io_fail_timeout_sec": 0, 00:18:10.341 "disable_auto_failback": false, 00:18:10.341 "generate_uuids": false, 00:18:10.341 "transport_tos": 0, 00:18:10.341 "nvme_error_stat": false, 00:18:10.341 "rdma_srq_size": 0, 00:18:10.341 "io_path_stat": false, 00:18:10.341 "allow_accel_sequence": false, 00:18:10.341 "rdma_max_cq_size": 0, 00:18:10.341 "rdma_cm_event_timeout_ms": 0, 00:18:10.341 "dhchap_digests": [ 00:18:10.341 "sha256", 00:18:10.341 "sha384", 00:18:10.341 "sha512" 00:18:10.341 ], 00:18:10.341 "dhchap_dhgroups": [ 00:18:10.341 "null", 00:18:10.341 "ffdhe2048", 00:18:10.341 "ffdhe3072", 00:18:10.341 "ffdhe4096", 00:18:10.341 "ffdhe 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.341 6144", 00:18:10.341 "ffdhe8192" 00:18:10.341 ] 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "bdev_nvme_set_hotplug", 00:18:10.341 "params": { 00:18:10.341 "period_us": 100000, 00:18:10.341 "enable": false 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "bdev_malloc_create", 00:18:10.341 "params": { 00:18:10.341 "name": "malloc0", 00:18:10.341 "num_blocks": 8192, 00:18:10.341 "block_size": 4096, 00:18:10.341 "physical_block_size": 4096, 
00:18:10.341 "uuid": "0014b6be-0aa9-4be8-b23e-7cb5ba7be897", 00:18:10.341 "optimal_io_boundary": 0 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "bdev_wait_for_examine" 00:18:10.341 } 00:18:10.341 ] 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "subsystem": "nbd", 00:18:10.341 "config": [] 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "subsystem": "scheduler", 00:18:10.341 "config": [ 00:18:10.341 { 00:18:10.341 "method": "framework_set_scheduler", 00:18:10.341 "params": { 00:18:10.341 "name": "static" 00:18:10.341 } 00:18:10.341 } 00:18:10.341 ] 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "subsystem": "nvmf", 00:18:10.341 "config": [ 00:18:10.341 { 00:18:10.341 "method": "nvmf_set_config", 00:18:10.341 "params": { 00:18:10.341 "discovery_filter": "match_any", 00:18:10.341 "admin_cmd_passthru": { 00:18:10.341 "identify_ctrlr": false 00:18:10.341 } 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_set_max_subsystems", 00:18:10.341 "params": { 00:18:10.341 "max_subsystems": 1024 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_set_crdt", 00:18:10.341 "params": { 00:18:10.341 "crdt1": 0, 00:18:10.341 "crdt2": 0, 00:18:10.341 "crdt3": 0 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_create_transport", 00:18:10.341 "params": { 00:18:10.341 "trtype": "TCP", 00:18:10.341 "max_queue_depth": 128, 00:18:10.341 "max_io_qpairs_per_ctrlr": 127, 00:18:10.341 "in_capsule_data_size": 4096, 00:18:10.341 "max_io_size": 131072, 00:18:10.341 "io_unit_size": 131072, 00:18:10.341 "max_aq_depth": 128, 00:18:10.341 "num_shared_buffers": 511, 00:18:10.341 "buf_cache_size": 4294967295, 00:18:10.341 "dif_insert_or_strip": false, 00:18:10.341 "zcopy": false, 00:18:10.341 "c2h_success": false, 00:18:10.341 "sock_priority": 0, 00:18:10.341 "abort_timeout_sec": 1, 00:18:10.341 "ack_timeout": 0, 00:18:10.341 "data_wr_pool_size": 0 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_create_subsystem", 00:18:10.341 "params": { 00:18:10.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.341 "allow_any_host": false, 00:18:10.341 "serial_number": "SPDK00000000000001", 00:18:10.341 "model_number": "SPDK bdev Controller", 00:18:10.341 "max_namespaces": 10, 00:18:10.341 "min_cntlid": 1, 00:18:10.341 "max_cntlid": 65519, 00:18:10.341 "ana_reporting": false 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_subsystem_add_host", 00:18:10.341 "params": { 00:18:10.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.341 "host": "nqn.2016-06.io.spdk:host1", 00:18:10.341 "psk": "/tmp/tmp.wJH5AmeQHC" 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_subsystem_add_ns", 00:18:10.341 "params": { 00:18:10.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.341 "namespace": { 00:18:10.341 "nsid": 1, 00:18:10.341 "bdev_name": "malloc0", 00:18:10.341 "nguid": "0014B6BE0AA94BE8B23E7CB5BA7BE897", 00:18:10.341 "uuid": "0014b6be-0aa9-4be8-b23e-7cb5ba7be897", 00:18:10.341 "no_auto_visible": false 00:18:10.341 } 00:18:10.341 } 00:18:10.341 }, 00:18:10.341 { 00:18:10.341 "method": "nvmf_subsystem_add_listener", 00:18:10.341 "params": { 00:18:10.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.341 "listen_address": { 00:18:10.341 "trtype": "TCP", 00:18:10.341 "adrfam": "IPv4", 00:18:10.341 "traddr": "10.0.0.2", 00:18:10.341 "trsvcid": "4420" 00:18:10.341 }, 00:18:10.341 "secure_channel": true 00:18:10.341 } 00:18:10.341 } 00:18:10.341 ] 00:18:10.341 } 00:18:10.341 ] 00:18:10.341 }' 
00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3417020 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3417020 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3417020 ']' 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.341 12:57:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.613 [2024-07-15 12:57:28.547042] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:10.613 [2024-07-15 12:57:28.547122] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.613 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.613 [2024-07-15 12:57:28.613134] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.613 [2024-07-15 12:57:28.724768] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.613 [2024-07-15 12:57:28.724836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.613 [2024-07-15 12:57:28.724850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.613 [2024-07-15 12:57:28.724862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.613 [2024-07-15 12:57:28.724872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
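The startup notices above also spell out the tracing options enabled by the 0xFFFF tracepoint group mask passed with -e. A sketch of the two recipes the message itself suggests, with the binary path and copy destination being assumptions:

    # snapshot the tracepoints of app instance 0 while it is running
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 /tmp/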
00:18:10.613 [2024-07-15 12:57:28.724964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.879 [2024-07-15 12:57:28.954888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.879 [2024-07-15 12:57:28.970854] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:10.879 [2024-07-15 12:57:28.986896] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.879 [2024-07-15 12:57:28.996888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3417170 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3417170 /var/tmp/bdevperf.sock 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3417170 ']' 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.442 12:57:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:11.442 "subsystems": [ 00:18:11.442 { 00:18:11.442 "subsystem": "keyring", 00:18:11.442 "config": [] 00:18:11.442 }, 00:18:11.442 { 00:18:11.442 "subsystem": "iobuf", 00:18:11.443 "config": [ 00:18:11.443 { 00:18:11.443 "method": "iobuf_set_options", 00:18:11.443 "params": { 00:18:11.443 "small_pool_count": 8192, 00:18:11.443 "large_pool_count": 1024, 00:18:11.443 "small_bufsize": 8192, 00:18:11.443 "large_bufsize": 135168 00:18:11.443 } 00:18:11.443 } 00:18:11.443 ] 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "subsystem": "sock", 00:18:11.443 "config": [ 00:18:11.443 { 00:18:11.443 "method": "sock_set_default_impl", 00:18:11.443 "params": { 00:18:11.443 "impl_name": "posix" 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "sock_impl_set_options", 00:18:11.443 "params": { 00:18:11.443 "impl_name": "ssl", 00:18:11.443 "recv_buf_size": 4096, 00:18:11.443 "send_buf_size": 4096, 00:18:11.443 "enable_recv_pipe": true, 00:18:11.443 "enable_quickack": false, 00:18:11.443 "enable_placement_id": 0, 00:18:11.443 "enable_zerocopy_send_server": true, 00:18:11.443 "enable_zerocopy_send_client": false, 00:18:11.443 "zerocopy_threshold": 0, 00:18:11.443 "tls_version": 0, 00:18:11.443 "enable_ktls": false 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "sock_impl_set_options", 00:18:11.443 "params": { 00:18:11.443 "impl_name": "posix", 00:18:11.443 "recv_buf_size": 2097152, 00:18:11.443 "send_buf_size": 2097152, 00:18:11.443 "enable_recv_pipe": true, 00:18:11.443 
"enable_quickack": false, 00:18:11.443 "enable_placement_id": 0, 00:18:11.443 "enable_zerocopy_send_server": true, 00:18:11.443 "enable_zerocopy_send_client": false, 00:18:11.443 "zerocopy_threshold": 0, 00:18:11.443 "tls_version": 0, 00:18:11.443 "enable_ktls": false 00:18:11.443 } 00:18:11.443 } 00:18:11.443 ] 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "subsystem": "vmd", 00:18:11.443 "config": [] 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "subsystem": "accel", 00:18:11.443 "config": [ 00:18:11.443 { 00:18:11.443 "method": "accel_set_options", 00:18:11.443 "params": { 00:18:11.443 "small_cache_size": 128, 00:18:11.443 "large_cache_size": 16, 00:18:11.443 "task_count": 2048, 00:18:11.443 "sequence_count": 2048, 00:18:11.443 "buf_count": 2048 00:18:11.443 } 00:18:11.443 } 00:18:11.443 ] 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "subsystem": "bdev", 00:18:11.443 "config": [ 00:18:11.443 { 00:18:11.443 "method": "bdev_set_options", 00:18:11.443 "params": { 00:18:11.443 "bdev_io_pool_size": 65535, 00:18:11.443 "bdev_io_cache_size": 256, 00:18:11.443 "bdev_auto_examine": true, 00:18:11.443 "iobuf_small_cache_size": 128, 00:18:11.443 "iobuf_large_cache_size": 16 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "bdev_raid_set_options", 00:18:11.443 "params": { 00:18:11.443 "process_window_size_kb": 1024 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "bdev_iscsi_set_options", 00:18:11.443 "params": { 00:18:11.443 "timeout_sec": 30 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "bdev_nvme_set_options", 00:18:11.443 "params": { 00:18:11.443 "action_on_timeout": "none", 00:18:11.443 "timeout_us": 0, 00:18:11.443 "timeout_admin_us": 0, 00:18:11.443 "keep_alive_timeout_ms": 10000, 00:18:11.443 "arbitration_burst": 0, 00:18:11.443 "low_priority_weight": 0, 00:18:11.443 "medium_priority_weight": 0, 00:18:11.443 "high_priority_weight": 0, 00:18:11.443 "nvme_adminq_poll_period_us": 10000, 00:18:11.443 "nvme_ioq_poll_period_us": 0, 00:18:11.443 "io_queue_requests": 512, 00:18:11.443 "delay_cmd_submit": true, 00:18:11.443 "transport_retry_count": 4, 00:18:11.443 "bdev_retry_count": 3, 00:18:11.443 "transport_ack_timeout": 0, 00:18:11.443 "ctrlr_loss_timeout_sec": 0, 00:18:11.443 "reconnect_delay_sec": 0, 00:18:11.443 "fast_io_fail_timeout_sec": 0, 00:18:11.443 "disable_auto_failback": false, 00:18:11.443 "generate_uuids": false, 00:18:11.443 "transport_tos": 0, 00:18:11.443 "nvme_error_stat": false, 00:18:11.443 "rdma_srq_size": 0, 00:18:11.443 "io_path_stat": false, 00:18:11.443 "allow_accel_sequence": false, 00:18:11.443 "rdma_max_cq_size": 0, 00:18:11.443 "rdma_cm_event_timeout_ms": 0, 00:18:11.443 "dhchap_digests": [ 00:18:11.443 "sha256", 00:18:11.443 "sha384", 00:18:11.443 "sha512" 00:18:11.443 ], 00:18:11.443 "dhchap_dhgroups": [ 00:18:11.443 "null", 00:18:11.443 "ffdhe2048", 00:18:11.443 "ffdhe3072", 00:18:11.443 "ffdhe4096", 00:18:11.443 "ffdhe6144", 00:18:11.443 "ffdhe8192" 00:18:11.443 ] 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "bdev_nvme_attach_controller", 00:18:11.443 "params": { 00:18:11.443 "name": "TLSTEST", 00:18:11.443 "trtype": "TCP", 00:18:11.443 "adrfam": "IPv4", 00:18:11.443 "traddr": "10.0.0.2", 00:18:11.443 "trsvcid": "4420", 00:18:11.443 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.443 "prchk_reftag": false, 00:18:11.443 "prchk_guard": false, 00:18:11.443 "ctrlr_loss_timeout_sec": 0, 00:18:11.443 "reconnect_delay_sec": 0, 00:18:11.443 "fast_io_fail_timeout_sec": 0, 00:18:11.443 
"psk": "/tmp/tmp.wJH5AmeQHC", 00:18:11.443 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.443 "hdgst": false, 00:18:11.443 "ddgst": false 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "bdev_nvme_set_hotplug", 00:18:11.443 "params": { 00:18:11.443 "period_us": 100000, 00:18:11.443 "enable": false 00:18:11.443 } 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "method": "bdev_wait_for_examine" 00:18:11.443 } 00:18:11.443 ] 00:18:11.443 }, 00:18:11.443 { 00:18:11.443 "subsystem": "nbd", 00:18:11.443 "config": [] 00:18:11.443 } 00:18:11.443 ] 00:18:11.443 }' 00:18:11.443 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.443 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.443 12:57:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.443 [2024-07-15 12:57:29.623625] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:11.443 [2024-07-15 12:57:29.623698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3417170 ] 00:18:11.701 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.701 [2024-07-15 12:57:29.681904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.701 [2024-07-15 12:57:29.790493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.958 [2024-07-15 12:57:29.956281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.958 [2024-07-15 12:57:29.956433] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:12.521 12:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.521 12:57:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:12.521 12:57:30 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:12.776 Running I/O for 10 seconds... 
00:18:22.745 00:18:22.745 Latency(us) 00:18:22.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.745 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.745 Verification LBA range: start 0x0 length 0x2000 00:18:22.745 TLSTESTn1 : 10.02 3656.83 14.28 0.00 0.00 34945.81 8883.77 46409.20 00:18:22.745 =================================================================================================================== 00:18:22.745 Total : 3656.83 14.28 0.00 0.00 34945.81 8883.77 46409.20 00:18:22.745 0 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 3417170 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3417170 ']' 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3417170 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3417170 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3417170' 00:18:22.745 killing process with pid 3417170 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3417170 00:18:22.745 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.745 00:18:22.745 Latency(us) 00:18:22.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.745 =================================================================================================================== 00:18:22.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.745 [2024-07-15 12:57:40.819401] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:22.745 12:57:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3417170 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 3417020 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3417020 ']' 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3417020 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3417020 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3417020' 00:18:23.004 killing process with pid 3417020 00:18:23.004 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3417020 00:18:23.004 [2024-07-15 12:57:41.105276] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal 
in v24.09 hit 1 times 00:18:23.005 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3417020 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3418617 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3418617 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3418617 ']' 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.263 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.263 [2024-07-15 12:57:41.426366] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:23.263 [2024-07-15 12:57:41.426456] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.263 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.522 [2024-07-15 12:57:41.489135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.522 [2024-07-15 12:57:41.590974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.522 [2024-07-15 12:57:41.591045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.522 [2024-07-15 12:57:41.591076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.522 [2024-07-15 12:57:41.591087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.522 [2024-07-15 12:57:41.591097] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
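Each target in this test is launched inside the cvl_0_0_ns_spdk network namespace, so the 10.0.0.2 listener lives on the namespaced test interfaces while the path-based JSON-RPC socket remains reachable from the unnamespaced test script. Roughly, using only names that appear in the log:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # path-based AF_UNIX sockets are not confined by the netns, so plain rpc.py still works,
    # e.g. the next step the log performs:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o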
00:18:23.522 [2024-07-15 12:57:41.591124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.wJH5AmeQHC 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.wJH5AmeQHC 00:18:23.522 12:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:23.781 [2024-07-15 12:57:41.960989] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:23.781 12:57:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:24.071 12:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:24.638 [2024-07-15 12:57:42.546530] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:24.638 [2024-07-15 12:57:42.546795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.638 12:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:24.638 malloc0 00:18:24.638 12:57:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:24.898 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wJH5AmeQHC 00:18:25.156 [2024-07-15 12:57:43.292166] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3418778 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3418778 /var/tmp/bdevperf.sock 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3418778 ']' 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.156 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.156 [2024-07-15 12:57:43.346355] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:25.156 [2024-07-15 12:57:43.346437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418778 ] 00:18:25.416 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.416 [2024-07-15 12:57:43.408571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.416 [2024-07-15 12:57:43.516326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.699 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.699 12:57:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:25.699 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wJH5AmeQHC 00:18:25.699 12:57:43 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:25.959 [2024-07-15 12:57:44.124425] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.218 nvme0n1 00:18:26.218 12:57:44 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.218 Running I/O for 1 seconds... 
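This pass switches the initiator side to the keyring flow: instead of handing bdev_nvme_attach_controller a raw PSK file path (the usage that produced the 'spdk_nvme_ctrlr_opts.psk ... deprecated' warnings earlier in the log), the key is first registered under a name and then referenced by that name, and that deprecation notice does not reappear for this attach. Condensed from the two rpc.py calls above, using only the names the log itself uses:

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wJH5AmeQHC
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The target still registers the key per host with nvmf_subsystem_add_host --psk /tmp/tmp.wJH5AmeQHC (tls.sh@58 above), which is the 'PSK path' usage the other deprecation notices point at.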
00:18:27.153 00:18:27.153 Latency(us) 00:18:27.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.153 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:27.153 Verification LBA range: start 0x0 length 0x2000 00:18:27.153 nvme0n1 : 1.02 3706.64 14.48 0.00 0.00 34219.02 7330.32 31263.10 00:18:27.153 =================================================================================================================== 00:18:27.153 Total : 3706.64 14.48 0.00 0.00 34219.02 7330.32 31263.10 00:18:27.153 0 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 3418778 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3418778 ']' 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3418778 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418778 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418778' 00:18:27.411 killing process with pid 3418778 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3418778 00:18:27.411 Received shutdown signal, test time was about 1.000000 seconds 00:18:27.411 00:18:27.411 Latency(us) 00:18:27.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.411 =================================================================================================================== 00:18:27.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.411 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3418778 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 3418617 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3418617 ']' 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3418617 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418617 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418617' 00:18:27.670 killing process with pid 3418617 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3418617 00:18:27.670 [2024-07-15 12:57:45.693169] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:27.670 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3418617 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:27.928 
12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3419178 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3419178 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3419178 ']' 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.928 12:57:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.928 [2024-07-15 12:57:46.028717] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:27.928 [2024-07-15 12:57:46.028822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.928 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.928 [2024-07-15 12:57:46.091888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.187 [2024-07-15 12:57:46.197104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.187 [2024-07-15 12:57:46.197158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.187 [2024-07-15 12:57:46.197187] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.187 [2024-07-15 12:57:46.197198] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.187 [2024-07-15 12:57:46.197208] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.187 [2024-07-15 12:57:46.197236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.187 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.187 [2024-07-15 12:57:46.340511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.187 malloc0 00:18:28.187 [2024-07-15 12:57:46.372424] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.187 [2024-07-15 12:57:46.372688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=3419206 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 3419206 /var/tmp/bdevperf.sock 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3419206 ']' 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.462 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.462 [2024-07-15 12:57:46.442147] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:18:28.462 [2024-07-15 12:57:46.442225] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419206 ] 00:18:28.462 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.462 [2024-07-15 12:57:46.499292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.462 [2024-07-15 12:57:46.605644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.719 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.719 12:57:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.719 12:57:46 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wJH5AmeQHC 00:18:28.976 12:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:29.233 [2024-07-15 12:57:47.229700] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.233 nvme0n1 00:18:29.233 12:57:47 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.233 Running I/O for 1 seconds... 00:18:30.606 00:18:30.606 Latency(us) 00:18:30.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.606 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.606 Verification LBA range: start 0x0 length 0x2000 00:18:30.606 nvme0n1 : 1.02 3672.54 14.35 0.00 0.00 34521.24 6893.42 31845.64 00:18:30.606 =================================================================================================================== 00:18:30.606 Total : 3672.54 14.35 0.00 0.00 34521.24 6893.42 31845.64 00:18:30.606 0 00:18:30.606 12:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:30.606 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.606 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.606 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.606 12:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:18:30.606 "subsystems": [ 00:18:30.606 { 00:18:30.606 "subsystem": "keyring", 00:18:30.606 "config": [ 00:18:30.606 { 00:18:30.606 "method": "keyring_file_add_key", 00:18:30.606 "params": { 00:18:30.606 "name": "key0", 00:18:30.606 "path": "/tmp/tmp.wJH5AmeQHC" 00:18:30.606 } 00:18:30.606 } 00:18:30.606 ] 00:18:30.606 }, 00:18:30.606 { 00:18:30.606 "subsystem": "iobuf", 00:18:30.606 "config": [ 00:18:30.606 { 00:18:30.606 "method": "iobuf_set_options", 00:18:30.606 "params": { 00:18:30.606 "small_pool_count": 8192, 00:18:30.606 "large_pool_count": 1024, 00:18:30.606 "small_bufsize": 8192, 00:18:30.606 "large_bufsize": 135168 00:18:30.606 } 00:18:30.606 } 00:18:30.606 ] 00:18:30.606 }, 00:18:30.606 { 00:18:30.606 "subsystem": "sock", 00:18:30.606 "config": [ 00:18:30.606 { 00:18:30.606 "method": "sock_set_default_impl", 00:18:30.606 "params": { 00:18:30.606 "impl_name": "posix" 00:18:30.606 } 
00:18:30.606 }, 00:18:30.606 { 00:18:30.606 "method": "sock_impl_set_options", 00:18:30.606 "params": { 00:18:30.606 "impl_name": "ssl", 00:18:30.606 "recv_buf_size": 4096, 00:18:30.606 "send_buf_size": 4096, 00:18:30.606 "enable_recv_pipe": true, 00:18:30.606 "enable_quickack": false, 00:18:30.606 "enable_placement_id": 0, 00:18:30.606 "enable_zerocopy_send_server": true, 00:18:30.606 "enable_zerocopy_send_client": false, 00:18:30.606 "zerocopy_threshold": 0, 00:18:30.606 "tls_version": 0, 00:18:30.606 "enable_ktls": false 00:18:30.606 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "sock_impl_set_options", 00:18:30.607 "params": { 00:18:30.607 "impl_name": "posix", 00:18:30.607 "recv_buf_size": 2097152, 00:18:30.607 "send_buf_size": 2097152, 00:18:30.607 "enable_recv_pipe": true, 00:18:30.607 "enable_quickack": false, 00:18:30.607 "enable_placement_id": 0, 00:18:30.607 "enable_zerocopy_send_server": true, 00:18:30.607 "enable_zerocopy_send_client": false, 00:18:30.607 "zerocopy_threshold": 0, 00:18:30.607 "tls_version": 0, 00:18:30.607 "enable_ktls": false 00:18:30.607 } 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "subsystem": "vmd", 00:18:30.607 "config": [] 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "subsystem": "accel", 00:18:30.607 "config": [ 00:18:30.607 { 00:18:30.607 "method": "accel_set_options", 00:18:30.607 "params": { 00:18:30.607 "small_cache_size": 128, 00:18:30.607 "large_cache_size": 16, 00:18:30.607 "task_count": 2048, 00:18:30.607 "sequence_count": 2048, 00:18:30.607 "buf_count": 2048 00:18:30.607 } 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "subsystem": "bdev", 00:18:30.607 "config": [ 00:18:30.607 { 00:18:30.607 "method": "bdev_set_options", 00:18:30.607 "params": { 00:18:30.607 "bdev_io_pool_size": 65535, 00:18:30.607 "bdev_io_cache_size": 256, 00:18:30.607 "bdev_auto_examine": true, 00:18:30.607 "iobuf_small_cache_size": 128, 00:18:30.607 "iobuf_large_cache_size": 16 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "bdev_raid_set_options", 00:18:30.607 "params": { 00:18:30.607 "process_window_size_kb": 1024 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "bdev_iscsi_set_options", 00:18:30.607 "params": { 00:18:30.607 "timeout_sec": 30 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "bdev_nvme_set_options", 00:18:30.607 "params": { 00:18:30.607 "action_on_timeout": "none", 00:18:30.607 "timeout_us": 0, 00:18:30.607 "timeout_admin_us": 0, 00:18:30.607 "keep_alive_timeout_ms": 10000, 00:18:30.607 "arbitration_burst": 0, 00:18:30.607 "low_priority_weight": 0, 00:18:30.607 "medium_priority_weight": 0, 00:18:30.607 "high_priority_weight": 0, 00:18:30.607 "nvme_adminq_poll_period_us": 10000, 00:18:30.607 "nvme_ioq_poll_period_us": 0, 00:18:30.607 "io_queue_requests": 0, 00:18:30.607 "delay_cmd_submit": true, 00:18:30.607 "transport_retry_count": 4, 00:18:30.607 "bdev_retry_count": 3, 00:18:30.607 "transport_ack_timeout": 0, 00:18:30.607 "ctrlr_loss_timeout_sec": 0, 00:18:30.607 "reconnect_delay_sec": 0, 00:18:30.607 "fast_io_fail_timeout_sec": 0, 00:18:30.607 "disable_auto_failback": false, 00:18:30.607 "generate_uuids": false, 00:18:30.607 "transport_tos": 0, 00:18:30.607 "nvme_error_stat": false, 00:18:30.607 "rdma_srq_size": 0, 00:18:30.607 "io_path_stat": false, 00:18:30.607 "allow_accel_sequence": false, 00:18:30.607 "rdma_max_cq_size": 0, 00:18:30.607 "rdma_cm_event_timeout_ms": 0, 00:18:30.607 "dhchap_digests": [ 00:18:30.607 "sha256", 
00:18:30.607 "sha384", 00:18:30.607 "sha512" 00:18:30.607 ], 00:18:30.607 "dhchap_dhgroups": [ 00:18:30.607 "null", 00:18:30.607 "ffdhe2048", 00:18:30.607 "ffdhe3072", 00:18:30.607 "ffdhe4096", 00:18:30.607 "ffdhe6144", 00:18:30.607 "ffdhe8192" 00:18:30.607 ] 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "bdev_nvme_set_hotplug", 00:18:30.607 "params": { 00:18:30.607 "period_us": 100000, 00:18:30.607 "enable": false 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "bdev_malloc_create", 00:18:30.607 "params": { 00:18:30.607 "name": "malloc0", 00:18:30.607 "num_blocks": 8192, 00:18:30.607 "block_size": 4096, 00:18:30.607 "physical_block_size": 4096, 00:18:30.607 "uuid": "16dcd425-2fd8-44ca-869f-a55ea3e0479a", 00:18:30.607 "optimal_io_boundary": 0 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "bdev_wait_for_examine" 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "subsystem": "nbd", 00:18:30.607 "config": [] 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "subsystem": "scheduler", 00:18:30.607 "config": [ 00:18:30.607 { 00:18:30.607 "method": "framework_set_scheduler", 00:18:30.607 "params": { 00:18:30.607 "name": "static" 00:18:30.607 } 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "subsystem": "nvmf", 00:18:30.607 "config": [ 00:18:30.607 { 00:18:30.607 "method": "nvmf_set_config", 00:18:30.607 "params": { 00:18:30.607 "discovery_filter": "match_any", 00:18:30.607 "admin_cmd_passthru": { 00:18:30.607 "identify_ctrlr": false 00:18:30.607 } 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_set_max_subsystems", 00:18:30.607 "params": { 00:18:30.607 "max_subsystems": 1024 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_set_crdt", 00:18:30.607 "params": { 00:18:30.607 "crdt1": 0, 00:18:30.607 "crdt2": 0, 00:18:30.607 "crdt3": 0 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_create_transport", 00:18:30.607 "params": { 00:18:30.607 "trtype": "TCP", 00:18:30.607 "max_queue_depth": 128, 00:18:30.607 "max_io_qpairs_per_ctrlr": 127, 00:18:30.607 "in_capsule_data_size": 4096, 00:18:30.607 "max_io_size": 131072, 00:18:30.607 "io_unit_size": 131072, 00:18:30.607 "max_aq_depth": 128, 00:18:30.607 "num_shared_buffers": 511, 00:18:30.607 "buf_cache_size": 4294967295, 00:18:30.607 "dif_insert_or_strip": false, 00:18:30.607 "zcopy": false, 00:18:30.607 "c2h_success": false, 00:18:30.607 "sock_priority": 0, 00:18:30.607 "abort_timeout_sec": 1, 00:18:30.607 "ack_timeout": 0, 00:18:30.607 "data_wr_pool_size": 0 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_create_subsystem", 00:18:30.607 "params": { 00:18:30.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.607 "allow_any_host": false, 00:18:30.607 "serial_number": "00000000000000000000", 00:18:30.607 "model_number": "SPDK bdev Controller", 00:18:30.607 "max_namespaces": 32, 00:18:30.607 "min_cntlid": 1, 00:18:30.607 "max_cntlid": 65519, 00:18:30.607 "ana_reporting": false 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_subsystem_add_host", 00:18:30.607 "params": { 00:18:30.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.607 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.607 "psk": "key0" 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_subsystem_add_ns", 00:18:30.607 "params": { 00:18:30.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.607 "namespace": { 00:18:30.607 "nsid": 1, 
00:18:30.607 "bdev_name": "malloc0", 00:18:30.607 "nguid": "16DCD4252FD844CA869FA55EA3E0479A", 00:18:30.607 "uuid": "16dcd425-2fd8-44ca-869f-a55ea3e0479a", 00:18:30.607 "no_auto_visible": false 00:18:30.607 } 00:18:30.607 } 00:18:30.607 }, 00:18:30.607 { 00:18:30.607 "method": "nvmf_subsystem_add_listener", 00:18:30.607 "params": { 00:18:30.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.607 "listen_address": { 00:18:30.607 "trtype": "TCP", 00:18:30.607 "adrfam": "IPv4", 00:18:30.607 "traddr": "10.0.0.2", 00:18:30.607 "trsvcid": "4420" 00:18:30.607 }, 00:18:30.607 "secure_channel": true 00:18:30.607 } 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 } 00:18:30.607 ] 00:18:30.607 }' 00:18:30.607 12:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:30.866 12:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:18:30.866 "subsystems": [ 00:18:30.866 { 00:18:30.866 "subsystem": "keyring", 00:18:30.866 "config": [ 00:18:30.866 { 00:18:30.866 "method": "keyring_file_add_key", 00:18:30.866 "params": { 00:18:30.866 "name": "key0", 00:18:30.866 "path": "/tmp/tmp.wJH5AmeQHC" 00:18:30.866 } 00:18:30.866 } 00:18:30.866 ] 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "subsystem": "iobuf", 00:18:30.866 "config": [ 00:18:30.866 { 00:18:30.866 "method": "iobuf_set_options", 00:18:30.866 "params": { 00:18:30.866 "small_pool_count": 8192, 00:18:30.866 "large_pool_count": 1024, 00:18:30.866 "small_bufsize": 8192, 00:18:30.866 "large_bufsize": 135168 00:18:30.866 } 00:18:30.866 } 00:18:30.866 ] 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "subsystem": "sock", 00:18:30.866 "config": [ 00:18:30.866 { 00:18:30.866 "method": "sock_set_default_impl", 00:18:30.866 "params": { 00:18:30.866 "impl_name": "posix" 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "sock_impl_set_options", 00:18:30.866 "params": { 00:18:30.866 "impl_name": "ssl", 00:18:30.866 "recv_buf_size": 4096, 00:18:30.866 "send_buf_size": 4096, 00:18:30.866 "enable_recv_pipe": true, 00:18:30.866 "enable_quickack": false, 00:18:30.866 "enable_placement_id": 0, 00:18:30.866 "enable_zerocopy_send_server": true, 00:18:30.866 "enable_zerocopy_send_client": false, 00:18:30.866 "zerocopy_threshold": 0, 00:18:30.866 "tls_version": 0, 00:18:30.866 "enable_ktls": false 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "sock_impl_set_options", 00:18:30.866 "params": { 00:18:30.866 "impl_name": "posix", 00:18:30.866 "recv_buf_size": 2097152, 00:18:30.866 "send_buf_size": 2097152, 00:18:30.866 "enable_recv_pipe": true, 00:18:30.866 "enable_quickack": false, 00:18:30.866 "enable_placement_id": 0, 00:18:30.866 "enable_zerocopy_send_server": true, 00:18:30.866 "enable_zerocopy_send_client": false, 00:18:30.866 "zerocopy_threshold": 0, 00:18:30.866 "tls_version": 0, 00:18:30.866 "enable_ktls": false 00:18:30.866 } 00:18:30.866 } 00:18:30.866 ] 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "subsystem": "vmd", 00:18:30.866 "config": [] 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "subsystem": "accel", 00:18:30.866 "config": [ 00:18:30.866 { 00:18:30.866 "method": "accel_set_options", 00:18:30.866 "params": { 00:18:30.866 "small_cache_size": 128, 00:18:30.866 "large_cache_size": 16, 00:18:30.866 "task_count": 2048, 00:18:30.866 "sequence_count": 2048, 00:18:30.866 "buf_count": 2048 00:18:30.866 } 00:18:30.866 } 00:18:30.866 ] 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "subsystem": "bdev", 00:18:30.866 "config": [ 
00:18:30.866 { 00:18:30.866 "method": "bdev_set_options", 00:18:30.866 "params": { 00:18:30.866 "bdev_io_pool_size": 65535, 00:18:30.866 "bdev_io_cache_size": 256, 00:18:30.866 "bdev_auto_examine": true, 00:18:30.866 "iobuf_small_cache_size": 128, 00:18:30.866 "iobuf_large_cache_size": 16 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "bdev_raid_set_options", 00:18:30.866 "params": { 00:18:30.866 "process_window_size_kb": 1024 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "bdev_iscsi_set_options", 00:18:30.866 "params": { 00:18:30.866 "timeout_sec": 30 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "bdev_nvme_set_options", 00:18:30.866 "params": { 00:18:30.866 "action_on_timeout": "none", 00:18:30.866 "timeout_us": 0, 00:18:30.866 "timeout_admin_us": 0, 00:18:30.866 "keep_alive_timeout_ms": 10000, 00:18:30.866 "arbitration_burst": 0, 00:18:30.866 "low_priority_weight": 0, 00:18:30.866 "medium_priority_weight": 0, 00:18:30.866 "high_priority_weight": 0, 00:18:30.866 "nvme_adminq_poll_period_us": 10000, 00:18:30.866 "nvme_ioq_poll_period_us": 0, 00:18:30.866 "io_queue_requests": 512, 00:18:30.866 "delay_cmd_submit": true, 00:18:30.866 "transport_retry_count": 4, 00:18:30.866 "bdev_retry_count": 3, 00:18:30.866 "transport_ack_timeout": 0, 00:18:30.866 "ctrlr_loss_timeout_sec": 0, 00:18:30.866 "reconnect_delay_sec": 0, 00:18:30.866 "fast_io_fail_timeout_sec": 0, 00:18:30.866 "disable_auto_failback": false, 00:18:30.866 "generate_uuids": false, 00:18:30.866 "transport_tos": 0, 00:18:30.866 "nvme_error_stat": false, 00:18:30.866 "rdma_srq_size": 0, 00:18:30.866 "io_path_stat": false, 00:18:30.866 "allow_accel_sequence": false, 00:18:30.866 "rdma_max_cq_size": 0, 00:18:30.866 "rdma_cm_event_timeout_ms": 0, 00:18:30.866 "dhchap_digests": [ 00:18:30.866 "sha256", 00:18:30.866 "sha384", 00:18:30.866 "sha512" 00:18:30.866 ], 00:18:30.866 "dhchap_dhgroups": [ 00:18:30.866 "null", 00:18:30.866 "ffdhe2048", 00:18:30.866 "ffdhe3072", 00:18:30.866 "ffdhe4096", 00:18:30.866 "ffdhe6144", 00:18:30.866 "ffdhe8192" 00:18:30.866 ] 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "bdev_nvme_attach_controller", 00:18:30.866 "params": { 00:18:30.866 "name": "nvme0", 00:18:30.866 "trtype": "TCP", 00:18:30.866 "adrfam": "IPv4", 00:18:30.866 "traddr": "10.0.0.2", 00:18:30.866 "trsvcid": "4420", 00:18:30.866 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.866 "prchk_reftag": false, 00:18:30.866 "prchk_guard": false, 00:18:30.866 "ctrlr_loss_timeout_sec": 0, 00:18:30.866 "reconnect_delay_sec": 0, 00:18:30.866 "fast_io_fail_timeout_sec": 0, 00:18:30.866 "psk": "key0", 00:18:30.866 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.866 "hdgst": false, 00:18:30.866 "ddgst": false 00:18:30.866 } 00:18:30.866 }, 00:18:30.866 { 00:18:30.866 "method": "bdev_nvme_set_hotplug", 00:18:30.866 "params": { 00:18:30.866 "period_us": 100000, 00:18:30.866 "enable": false 00:18:30.867 } 00:18:30.867 }, 00:18:30.867 { 00:18:30.867 "method": "bdev_enable_histogram", 00:18:30.867 "params": { 00:18:30.867 "name": "nvme0n1", 00:18:30.867 "enable": true 00:18:30.867 } 00:18:30.867 }, 00:18:30.867 { 00:18:30.867 "method": "bdev_wait_for_examine" 00:18:30.867 } 00:18:30.867 ] 00:18:30.867 }, 00:18:30.867 { 00:18:30.867 "subsystem": "nbd", 00:18:30.867 "config": [] 00:18:30.867 } 00:18:30.867 ] 00:18:30.867 }' 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 3419206 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 3419206 ']' 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3419206 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419206 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419206' 00:18:30.867 killing process with pid 3419206 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3419206 00:18:30.867 Received shutdown signal, test time was about 1.000000 seconds 00:18:30.867 00:18:30.867 Latency(us) 00:18:30.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.867 =================================================================================================================== 00:18:30.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.867 12:57:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3419206 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 3419178 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3419178 ']' 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3419178 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419178 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419178' 00:18:31.124 killing process with pid 3419178 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3419178 00:18:31.124 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3419178 00:18:31.382 12:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:31.382 12:57:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.382 12:57:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:31.382 "subsystems": [ 00:18:31.382 { 00:18:31.382 "subsystem": "keyring", 00:18:31.382 "config": [ 00:18:31.382 { 00:18:31.382 "method": "keyring_file_add_key", 00:18:31.382 "params": { 00:18:31.382 "name": "key0", 00:18:31.382 "path": "/tmp/tmp.wJH5AmeQHC" 00:18:31.382 } 00:18:31.382 } 00:18:31.382 ] 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "subsystem": "iobuf", 00:18:31.382 "config": [ 00:18:31.382 { 00:18:31.382 "method": "iobuf_set_options", 00:18:31.382 "params": { 00:18:31.382 "small_pool_count": 8192, 00:18:31.382 "large_pool_count": 1024, 00:18:31.382 "small_bufsize": 8192, 00:18:31.382 "large_bufsize": 135168 00:18:31.382 } 00:18:31.382 } 00:18:31.382 ] 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "subsystem": "sock", 00:18:31.382 "config": [ 00:18:31.382 { 
00:18:31.382 "method": "sock_set_default_impl", 00:18:31.382 "params": { 00:18:31.382 "impl_name": "posix" 00:18:31.382 } 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "method": "sock_impl_set_options", 00:18:31.382 "params": { 00:18:31.382 "impl_name": "ssl", 00:18:31.382 "recv_buf_size": 4096, 00:18:31.382 "send_buf_size": 4096, 00:18:31.382 "enable_recv_pipe": true, 00:18:31.382 "enable_quickack": false, 00:18:31.382 "enable_placement_id": 0, 00:18:31.382 "enable_zerocopy_send_server": true, 00:18:31.382 "enable_zerocopy_send_client": false, 00:18:31.382 "zerocopy_threshold": 0, 00:18:31.382 "tls_version": 0, 00:18:31.382 "enable_ktls": false 00:18:31.382 } 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "method": "sock_impl_set_options", 00:18:31.382 "params": { 00:18:31.382 "impl_name": "posix", 00:18:31.382 "recv_buf_size": 2097152, 00:18:31.382 "send_buf_size": 2097152, 00:18:31.382 "enable_recv_pipe": true, 00:18:31.382 "enable_quickack": false, 00:18:31.382 "enable_placement_id": 0, 00:18:31.382 "enable_zerocopy_send_server": true, 00:18:31.382 "enable_zerocopy_send_client": false, 00:18:31.382 "zerocopy_threshold": 0, 00:18:31.382 "tls_version": 0, 00:18:31.382 "enable_ktls": false 00:18:31.382 } 00:18:31.382 } 00:18:31.382 ] 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "subsystem": "vmd", 00:18:31.382 "config": [] 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "subsystem": "accel", 00:18:31.382 "config": [ 00:18:31.382 { 00:18:31.382 "method": "accel_set_options", 00:18:31.382 "params": { 00:18:31.382 "small_cache_size": 128, 00:18:31.382 "large_cache_size": 16, 00:18:31.382 "task_count": 2048, 00:18:31.382 "sequence_count": 2048, 00:18:31.382 "buf_count": 2048 00:18:31.382 } 00:18:31.382 } 00:18:31.382 ] 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "subsystem": "bdev", 00:18:31.382 "config": [ 00:18:31.382 { 00:18:31.382 "method": "bdev_set_options", 00:18:31.382 "params": { 00:18:31.382 "bdev_io_pool_size": 65535, 00:18:31.382 "bdev_io_cache_size": 256, 00:18:31.382 "bdev_auto_examine": true, 00:18:31.382 "iobuf_small_cache_size": 128, 00:18:31.382 "iobuf_large_cache_size": 16 00:18:31.382 } 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "method": "bdev_raid_set_options", 00:18:31.382 "params": { 00:18:31.382 "process_window_size_kb": 1024 00:18:31.382 } 00:18:31.382 }, 00:18:31.382 { 00:18:31.382 "method": "bdev_iscsi_set_options", 00:18:31.382 "params": { 00:18:31.382 "timeout_sec": 30 00:18:31.382 } 00:18:31.382 }, 00:18:31.383 { 00:18:31.383 "method": "bdev_nvme_set_options", 00:18:31.383 "params": { 00:18:31.383 "action_on_timeout": "none", 00:18:31.383 "timeout_us": 0, 00:18:31.383 "timeout_admin_us": 0, 00:18:31.383 "keep_alive_timeout_ms": 10000, 00:18:31.383 "arbitration_burst": 0, 00:18:31.383 "low_priority_weight": 0, 00:18:31.383 "medium_priority_weight": 0, 00:18:31.383 "high_priority_weight": 0, 00:18:31.383 "nvme_adminq_poll_period_us": 10000, 00:18:31.383 "nvme_ioq_poll_period_us": 0, 00:18:31.383 "io_queue_requests": 0, 00:18:31.383 "delay_cmd_submit": true, 00:18:31.383 "transport_retry_count": 4, 00:18:31.383 "bdev_retry_count": 3, 00:18:31.383 "transport_ack_timeout": 0, 00:18:31.383 "ctrlr_loss_timeout_sec": 0, 00:18:31.383 "reconnect_delay_sec": 0, 00:18:31.383 "fast_io_fail_timeout_sec": 0, 00:18:31.383 "disable_auto_failback": false, 00:18:31.383 "generate_uuids": false, 00:18:31.383 "transport_tos": 0, 00:18:31.383 "nvme_error_stat": false, 00:18:31.383 "rdma_srq_size": 0, 00:18:31.383 "io_path_stat": false, 00:18:31.383 "allow_accel_sequence": false, 00:18:31.383 
"rdma_max_cq_size": 0, 00:18:31.383 "rdma_cm_event_timeout_ms": 0, 00:18:31.383 "dhchap_digests": [ 00:18:31.383 "sha256", 00:18:31.383 "sha384", 00:18:31.383 "sha512" 00:18:31.383 ], 00:18:31.383 "dhchap_dhgroups": [ 00:18:31.383 "null", 00:18:31.383 "ffdhe2048", 00:18:31.383 "ffdhe3072", 00:18:31.383 "ffdhe4096", 00:18:31.383 "ffdhe6144", 00:18:31.383 "ffdhe8192" 00:18:31.383 ] 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "bdev_nvme_set_hotplug", 00:18:31.383 "params": { 00:18:31.383 "period_us": 100000, 00:18:31.383 "enable": false 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "bdev_malloc_create", 00:18:31.383 "params": { 00:18:31.383 "name": "malloc0", 00:18:31.383 "num_blocks": 8192, 00:18:31.383 "block_size": 4096, 00:18:31.383 "physical_block_size": 4096, 00:18:31.383 "uuid": "16dcd425-2fd8-44ca-869f-a55ea3e0479a", 00:18:31.383 "optimal_io_boundary": 0 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "bdev_wait_for_examine" 00:18:31.383 } 00:18:31.383 ] 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "subsystem": "nbd", 00:18:31.383 "config": [] 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "subsystem": "scheduler", 00:18:31.383 "config": [ 00:18:31.383 { 00:18:31.383 "method": "framework_set_scheduler", 00:18:31.383 "params": { 00:18:31.383 "name": "static" 00:18:31.383 } 00:18:31.383 } 00:18:31.383 ] 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "subsystem": "nvmf", 00:18:31.383 "config": [ 00:18:31.383 { 00:18:31.383 "method": "nvmf_set_config", 00:18:31.383 "params": { 00:18:31.383 "discovery_filter": "match_any", 00:18:31.383 "admin_cmd_passthru": { 00:18:31.383 "identify_ctrlr": false 00:18:31.383 } 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_set_max_subsystems", 00:18:31.383 "params": { 00:18:31.383 "max_subsystems": 1024 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_set_crdt", 00:18:31.383 "params": { 00:18:31.383 "crdt1": 0, 00:18:31.383 "crdt2": 0, 00:18:31.383 "crdt3": 0 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_create_transport", 00:18:31.383 "params": { 00:18:31.383 "trtype": "TCP", 00:18:31.383 "max_queue_depth": 128, 00:18:31.383 "max_io_qpairs_per_ctrlr": 127, 00:18:31.383 "in_capsule_data_size": 4096, 00:18:31.383 "max_io_size": 131072, 00:18:31.383 "io_unit_size": 131072, 00:18:31.383 "max_aq_depth": 128, 00:18:31.383 "num_shared_buffers": 511, 00:18:31.383 "buf_cache_size": 4294967295, 00:18:31.383 "dif_insert_or_strip": false, 00:18:31.383 "zcopy": false, 00:18:31.383 "c2h_success": false, 00:18:31.383 "sock_priority": 0, 00:18:31.383 "abort_timeout_sec": 1, 00:18:31.383 "ack_timeout": 0, 00:18:31.383 "data_wr_pool_size": 0 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_create_subsystem", 00:18:31.383 "params": { 00:18:31.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.383 "allow_any_host": false, 00:18:31.383 "serial_number": "00000000000000000000", 00:18:31.383 "model_number": "SPDK bdev Controller", 00:18:31.383 "max_namespaces": 32, 00:18:31.383 "min_cntlid": 1, 00:18:31.383 "max_cntlid": 65519, 00:18:31.383 "ana_reporting": false 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_subsystem_add_host", 00:18:31.383 "params": { 00:18:31.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.383 "host": "nqn.2016-06.io.spdk:host1", 00:18:31.383 "psk": "key0" 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_subsystem_add_ns", 00:18:31.383 
"params": { 00:18:31.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.383 "namespace": { 00:18:31.383 "nsid": 1, 00:18:31.383 "bdev_name": "malloc0", 00:18:31.383 "nguid": "16DCD4252FD844CA869FA55EA3E0479A", 00:18:31.383 "uuid": "16dcd425-2fd8-44ca-869f-a55ea3e0479a", 00:18:31.383 "no_auto_visible": false 00:18:31.383 } 00:18:31.383 } 00:18:31.383 }, 00:18:31.383 { 00:18:31.383 "method": "nvmf_subsystem_add_listener", 00:18:31.383 "params": { 00:18:31.383 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.383 "listen_address": { 00:18:31.383 "trtype": "TCP", 00:18:31.383 "adrfam": "IPv4", 00:18:31.383 "traddr": "10.0.0.2", 00:18:31.383 "trsvcid": "4420" 00:18:31.383 }, 00:18:31.383 "secure_channel": true 00:18:31.383 } 00:18:31.383 } 00:18:31.383 ] 00:18:31.383 } 00:18:31.383 ] 00:18:31.383 }' 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3419617 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3419617 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3419617 ']' 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.383 12:57:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.383 [2024-07-15 12:57:49.507925] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:31.383 [2024-07-15 12:57:49.508018] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.383 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.383 [2024-07-15 12:57:49.571107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.642 [2024-07-15 12:57:49.676860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.642 [2024-07-15 12:57:49.676917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.642 [2024-07-15 12:57:49.676945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.642 [2024-07-15 12:57:49.676957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.642 [2024-07-15 12:57:49.676966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.642 [2024-07-15 12:57:49.677038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.901 [2024-07-15 12:57:49.914765] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.901 [2024-07-15 12:57:49.946807] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.901 [2024-07-15 12:57:49.954939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=3419770 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 3419770 /var/tmp/bdevperf.sock 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3419770 ']' 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:18:32.467 "subsystems": [ 00:18:32.467 { 00:18:32.467 "subsystem": "keyring", 00:18:32.467 "config": [ 00:18:32.467 { 00:18:32.467 "method": "keyring_file_add_key", 00:18:32.467 "params": { 00:18:32.467 "name": "key0", 00:18:32.467 "path": "/tmp/tmp.wJH5AmeQHC" 00:18:32.467 } 00:18:32.467 } 00:18:32.467 ] 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "subsystem": "iobuf", 00:18:32.467 "config": [ 00:18:32.467 { 00:18:32.467 "method": "iobuf_set_options", 00:18:32.467 "params": { 00:18:32.467 "small_pool_count": 8192, 00:18:32.467 "large_pool_count": 1024, 00:18:32.467 "small_bufsize": 8192, 00:18:32.467 "large_bufsize": 135168 00:18:32.467 } 00:18:32.467 } 00:18:32.467 ] 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "subsystem": "sock", 00:18:32.467 "config": [ 00:18:32.467 { 00:18:32.467 "method": "sock_set_default_impl", 00:18:32.467 "params": { 00:18:32.467 "impl_name": "posix" 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "sock_impl_set_options", 00:18:32.467 "params": { 00:18:32.467 "impl_name": "ssl", 00:18:32.467 "recv_buf_size": 4096, 00:18:32.467 "send_buf_size": 4096, 00:18:32.467 "enable_recv_pipe": true, 00:18:32.467 "enable_quickack": false, 00:18:32.467 "enable_placement_id": 0, 00:18:32.467 "enable_zerocopy_send_server": true, 00:18:32.467 "enable_zerocopy_send_client": false, 00:18:32.467 "zerocopy_threshold": 0, 00:18:32.467 "tls_version": 0, 00:18:32.467 "enable_ktls": false 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "sock_impl_set_options", 00:18:32.467 "params": { 00:18:32.467 "impl_name": "posix", 00:18:32.467 "recv_buf_size": 2097152, 00:18:32.467 "send_buf_size": 2097152, 00:18:32.467 
"enable_recv_pipe": true, 00:18:32.467 "enable_quickack": false, 00:18:32.467 "enable_placement_id": 0, 00:18:32.467 "enable_zerocopy_send_server": true, 00:18:32.467 "enable_zerocopy_send_client": false, 00:18:32.467 "zerocopy_threshold": 0, 00:18:32.467 "tls_version": 0, 00:18:32.467 "enable_ktls": false 00:18:32.467 } 00:18:32.467 } 00:18:32.467 ] 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "subsystem": "vmd", 00:18:32.467 "config": [] 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "subsystem": "accel", 00:18:32.467 "config": [ 00:18:32.467 { 00:18:32.467 "method": "accel_set_options", 00:18:32.467 "params": { 00:18:32.467 "small_cache_size": 128, 00:18:32.467 "large_cache_size": 16, 00:18:32.467 "task_count": 2048, 00:18:32.467 "sequence_count": 2048, 00:18:32.467 "buf_count": 2048 00:18:32.467 } 00:18:32.467 } 00:18:32.467 ] 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "subsystem": "bdev", 00:18:32.467 "config": [ 00:18:32.467 { 00:18:32.467 "method": "bdev_set_options", 00:18:32.467 "params": { 00:18:32.467 "bdev_io_pool_size": 65535, 00:18:32.467 "bdev_io_cache_size": 256, 00:18:32.467 "bdev_auto_examine": true, 00:18:32.467 "iobuf_small_cache_size": 128, 00:18:32.467 "iobuf_large_cache_size": 16 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_raid_set_options", 00:18:32.467 "params": { 00:18:32.467 "process_window_size_kb": 1024 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_iscsi_set_options", 00:18:32.467 "params": { 00:18:32.467 "timeout_sec": 30 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_nvme_set_options", 00:18:32.467 "params": { 00:18:32.467 "action_on_timeout": "none", 00:18:32.467 "timeout_us": 0, 00:18:32.467 "timeout_admin_us": 0, 00:18:32.467 "keep_alive_timeout_ms": 10000, 00:18:32.467 "arbitration_burst": 0, 00:18:32.467 "low_priority_weight": 0, 00:18:32.467 "medium_priority_weight": 0, 00:18:32.467 "high_priority_weight": 0, 00:18:32.467 "nvme_adminq_poll_period_us": 10000, 00:18:32.467 "nvme_ioq_poll_period_us": 0, 00:18:32.467 "io_queue_requests": 512, 00:18:32.467 "delay_cmd_submit": true, 00:18:32.467 "transport_retry_count": 4, 00:18:32.467 "bdev_retry_count": 3, 00:18:32.467 "transport_ack_timeout": 0, 00:18:32.467 "ctrlr_loss_timeout_sec": 0, 00:18:32.467 "reconnect_delay_sec": 0, 00:18:32.467 "fast_io_fail_timeout_sec": 0, 00:18:32.467 "disable_auto_failback": false, 00:18:32.467 "generate_uuids": false, 00:18:32.467 "transport_tos": 0, 00:18:32.467 "nvme_error_stat": false, 00:18:32.467 "rdma_srq_size": 0, 00:18:32.467 "io_path_stat": false, 00:18:32.467 "allow_accel_sequence": false, 00:18:32.467 "rdma_max_cq_size": 0, 00:18:32.467 "rdma_cm_event_timeout_ms": 0, 00:18:32.467 "dhchap_digests": [ 00:18:32.467 "sha256", 00:18:32.467 "sha384", 00:18:32.467 "sha512" 00:18:32.467 ], 00:18:32.467 "dhchap_dhgroups": [ 00:18:32.467 "null", 00:18:32.467 "ffdhe2048", 00:18:32.467 "ffdhe3072", 00:18:32.467 "ffdhe4096", 00:18:32.467 "ffdhe6144", 00:18:32.467 "ffdhe8192" 00:18:32.467 ] 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_nvme_attach_controller", 00:18:32.467 "params": { 00:18:32.467 "name": "nvme0", 00:18:32.467 "trtype": "TCP", 00:18:32.467 "adrfam": "IPv4", 00:18:32.467 "traddr": "10.0.0.2", 00:18:32.467 "trsvcid": "4420", 00:18:32.467 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.467 "prchk_reftag": false, 00:18:32.467 "prchk_guard": false, 00:18:32.467 "ctrlr_loss_timeout_sec": 0, 00:18:32.467 "reconnect_delay_sec": 0, 00:18:32.467 
"fast_io_fail_timeout_sec": 0, 00:18:32.467 "psk": "key0", 00:18:32.467 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.467 "hdgst": false, 00:18:32.467 "ddgst": false 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_nvme_set_hotplug", 00:18:32.467 "params": { 00:18:32.467 "period_us": 100000, 00:18:32.467 "enable": false 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_enable_histogram", 00:18:32.467 "params": { 00:18:32.467 "name": "nvme0n1", 00:18:32.467 "enable": true 00:18:32.467 } 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "method": "bdev_wait_for_examine" 00:18:32.467 } 00:18:32.467 ] 00:18:32.467 }, 00:18:32.467 { 00:18:32.467 "subsystem": "nbd", 00:18:32.467 "config": [] 00:18:32.467 } 00:18:32.467 ] 00:18:32.467 }' 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.467 12:57:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.467 [2024-07-15 12:57:50.540968] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:32.467 [2024-07-15 12:57:50.541047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419770 ] 00:18:32.467 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.467 [2024-07-15 12:57:50.599058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.727 [2024-07-15 12:57:50.709619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.727 [2024-07-15 12:57:50.891347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.661 12:57:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.661 12:57:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.661 12:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:33.661 12:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:33.661 12:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.661 12:57:51 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.919 Running I/O for 1 seconds... 
00:18:34.852 00:18:34.852 Latency(us) 00:18:34.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.852 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:34.852 Verification LBA range: start 0x0 length 0x2000 00:18:34.852 nvme0n1 : 1.02 3573.81 13.96 0.00 0.00 35479.30 5873.97 41166.32 00:18:34.852 =================================================================================================================== 00:18:34.852 Total : 3573.81 13.96 0.00 0.00 35479.30 5873.97 41166.32 00:18:34.852 0 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:34.852 12:57:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:34.853 nvmf_trace.0 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 3419770 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3419770 ']' 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3419770 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.853 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419770 00:18:35.111 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:35.111 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:35.111 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419770' 00:18:35.111 killing process with pid 3419770 00:18:35.111 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3419770 00:18:35.111 Received shutdown signal, test time was about 1.000000 seconds 00:18:35.111 00:18:35.111 Latency(us) 00:18:35.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.111 =================================================================================================================== 00:18:35.111 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.111 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3419770 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 
00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.369 rmmod nvme_tcp 00:18:35.369 rmmod nvme_fabrics 00:18:35.369 rmmod nvme_keyring 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3419617 ']' 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3419617 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3419617 ']' 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3419617 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419617 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419617' 00:18:35.369 killing process with pid 3419617 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3419617 00:18:35.369 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3419617 00:18:35.627 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.627 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.627 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.628 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.628 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.628 12:57:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.628 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.628 12:57:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.155 12:57:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:38.155 12:57:55 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Eq79br0DLs /tmp/tmp.xaH23tfEhu /tmp/tmp.wJH5AmeQHC 00:18:38.155 00:18:38.155 real 1m20.235s 00:18:38.155 user 2m7.872s 00:18:38.155 sys 0m28.644s 00:18:38.155 12:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.155 12:57:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.155 ************************************ 00:18:38.155 END TEST nvmf_tls 00:18:38.155 ************************************ 00:18:38.155 12:57:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:38.155 12:57:55 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:38.155 12:57:55 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:38.155 12:57:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.155 12:57:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:38.155 ************************************ 00:18:38.155 START TEST nvmf_fips 00:18:38.155 ************************************ 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:38.155 * Looking for test storage... 00:18:38.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:38.155 
12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:38.155 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:38.156 Error setting digest 00:18:38.156 002257677F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:38.156 002257677F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.156 12:57:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.057 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.058 
12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:40.058 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:40.058 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:40.058 Found net devices under 0000:84:00.0: cvl_0_0 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:40.058 Found net devices under 0000:84:00.1: cvl_0_1 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:18:40.058 00:18:40.058 --- 10.0.0.2 ping statistics --- 00:18:40.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.058 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:18:40.058 00:18:40.058 --- 10.0.0.1 ping statistics --- 00:18:40.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.058 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3422138 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3422138 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3422138 ']' 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.058 12:57:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.317 [2024-07-15 12:57:58.283460] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:40.317 [2024-07-15 12:57:58.283565] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.317 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.317 [2024-07-15 12:57:58.349131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.317 [2024-07-15 12:57:58.460646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.317 [2024-07-15 12:57:58.460700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
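The nvmf_tcp_init steps traced above stitch the two E810 ports into a back-to-back pair: cvl_0_0 is moved into a private network namespace to act as the target side, cvl_0_1 stays in the default namespace as the initiator, and the two one-packet pings above confirm reachability in both directions before nvmf_tgt is started (its startup notices continue below). A condensed reconstruction of that topology setup follows; it is a sketch of the traced commands, not the nvmf/common.sh source, and the interface names and addresses are the ones discovered on this host.

  # Build the loopback topology used by the TCP tests (assumes the two E810
  # ports enumerate as cvl_0_0 and cvl_0_1, as they do in this run).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator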
00:18:40.317 [2024-07-15 12:57:58.460728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.317 [2024-07-15 12:57:58.460747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.317 [2024-07-15 12:57:58.460758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.317 [2024-07-15 12:57:58.460798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.317 [2024-07-15 12:57:59.455330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.317 [2024-07-15 12:57:59.471304] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.317 [2024-07-15 12:57:59.471506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.317 [2024-07-15 12:57:59.501089] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:41.317 malloc0 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3422303 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3422303 /var/tmp/bdevperf.sock 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3422303 ']' 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.317 12:57:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:41.575 [2024-07-15 12:57:59.588448] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:18:41.575 [2024-07-15 12:57:59.588517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422303 ] 00:18:41.575 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.575 [2024-07-15 12:57:59.647569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.575 [2024-07-15 12:57:59.761239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.542 12:58:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.543 12:58:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:42.543 12:58:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:42.802 [2024-07-15 12:58:00.802070] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.802 [2024-07-15 12:58:00.802222] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:42.802 TLSTESTn1 00:18:42.802 12:58:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.802 Running I/O for 10 seconds... 
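The attach above is the initiator half of the TLS test: the target was already configured through setup_nvmf_tgt_conf with the interleaved PSK written to key.txt and a TLS-enabled listener on 10.0.0.2:4420, and bdevperf then connects with the same key and runs a 10-second verify workload. A condensed sketch of that initiator-side sequence follows; $SPDK is shorthand introduced here for the workspace spdk checkout, the key material is elided, and the harness actually launches bdevperf through its own wrapper and waits for the RPC socket rather than using a plain background job.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # shorthand for this sketch

  # PSK file with owner-only permissions (the full key value appears in the trace above).
  echo -n 'NVMeTLSkey-1:01:...' > "$SPDK/test/nvmf/fips/key.txt"
  chmod 0600 "$SPDK/test/nvmf/fips/key.txt"

  # bdevperf on core 2 (-m 0x4), idle (-z) until told to run, with its own RPC socket.
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &

  # Attach to the target's TLS listener using the PSK; the resulting bdev is TLSTESTn1.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk "$SPDK/test/nvmf/fips/key.txt"

  # Start the queued verify workload.
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The latency summary that follows shows this run sustaining roughly 3.4K IOPS (about 13.5 MiB/s) with no failures or timeouts over the 10 seconds.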
00:18:55.023 00:18:55.023 Latency(us) 00:18:55.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.023 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:55.023 Verification LBA range: start 0x0 length 0x2000 00:18:55.023 TLSTESTn1 : 10.03 3461.50 13.52 0.00 0.00 36905.70 9029.40 40001.23 00:18:55.023 =================================================================================================================== 00:18:55.023 Total : 3461.50 13.52 0.00 0.00 36905.70 9029.40 40001.23 00:18:55.023 0 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:55.023 nvmf_trace.0 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3422303 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3422303 ']' 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3422303 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3422303 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3422303' 00:18:55.023 killing process with pid 3422303 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3422303 00:18:55.023 Received shutdown signal, test time was about 10.000000 seconds 00:18:55.023 00:18:55.023 Latency(us) 00:18:55.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.023 =================================================================================================================== 00:18:55.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:55.023 [2024-07-15 12:58:11.165957] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3422303 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:55.023 rmmod nvme_tcp 00:18:55.023 rmmod nvme_fabrics 00:18:55.023 rmmod nvme_keyring 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:55.023 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3422138 ']' 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3422138 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3422138 ']' 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3422138 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3422138 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3422138' 00:18:55.024 killing process with pid 3422138 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3422138 00:18:55.024 [2024-07-15 12:58:11.507498] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3422138 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.024 12:58:11 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.961 12:58:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:55.961 12:58:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:55.961 00:18:55.961 real 0m18.039s 00:18:55.961 user 0m22.455s 00:18:55.961 sys 0m7.166s 00:18:55.961 12:58:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:55.961 12:58:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:55.961 ************************************ 00:18:55.961 END TEST nvmf_fips 
00:18:55.961 ************************************ 00:18:55.961 12:58:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:55.961 12:58:13 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:55.961 12:58:13 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:55.961 12:58:13 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:18:55.961 12:58:13 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:18:55.961 12:58:13 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.961 12:58:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.865 12:58:15 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:57.866 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:57.866 12:58:15 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:57.866 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:57.866 Found net devices under 0000:84:00.0: cvl_0_0 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:57.866 Found net devices under 0000:84:00.1: cvl_0_1 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:18:57.866 12:58:15 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:57.866 12:58:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:57.866 12:58:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
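Before the next test's banner below, note how the fips run above finishes: bdevperf and nvmf_tgt are killed, the NVMe/TCP kernel modules are unloaded (the rmmod lines above), the namespace topology is dismantled, and the PSK file is removed. A condensed sketch of that cleanup follows; the pids are the ones reported in this run, and the namespace deletion is an assumption about what the _remove_spdk_ns helper boils down to, since the trace only shows it being evaluated.

  bdevperf_pid=3422303   # pids reported earlier in this run
  nvmfpid=3422138

  kill "$bdevperf_pid" "$nvmfpid"      # the harness then waits for both processes to exit
  sync
  modprobe -v -r nvme-tcp              # removes nvme_tcp plus its now-unused dependencies
                                       # (nvme_fabrics, nvme_keyring), per the rmmod lines above
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns; returns cvl_0_0
                                       # to the default namespace
  ip -4 addr flush cvl_0_1
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt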
00:18:57.866 12:58:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:57.866 ************************************ 00:18:57.866 START TEST nvmf_perf_adq 00:18:57.866 ************************************ 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:57.866 * Looking for test storage... 00:18:57.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.866 12:58:15 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:00.394 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:00.394 Found 0000:84:00.1 (0x8086 - 0x159b) 
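The device scan above (it continues below, mapping each function to its kernel interface) keys supported NICs off PCI vendor:device IDs; 0x8086:0x159b is the E810 part matched twice here, and each hit is then resolved to a net device name through sysfs. A minimal sketch of that resolution step, assuming the same two PCI addresses found on this host:

  # Map a PCI function to its bound net device(s) the way the trace above does:
  # every interface bound to the function shows up under /sys/bus/pci/devices/<addr>/net/.
  for pci in 0000:84:00.0 0000:84:00.1; do
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$path" ] || continue                   # function has no bound net device
          echo "Found net devices under $pci: ${path##*/}"
      done
  done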
00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:00.394 Found net devices under 0000:84:00.0: cvl_0_0 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:00.394 Found net devices under 0000:84:00.1: cvl_0_1 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:00.394 12:58:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:00.652 12:58:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:02.554 12:58:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:07.856 12:58:25 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:07.856 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:07.857 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:07.857 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:07.857 Found net devices under 0000:84:00.0: cvl_0_0 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:07.857 Found net devices under 0000:84:00.1: cvl_0_1 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.857 12:58:25 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:07.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:07.857 00:19:07.857 --- 10.0.0.2 ping statistics --- 00:19:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.857 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:19:07.857 00:19:07.857 --- 10.0.0.1 ping statistics --- 00:19:07.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.857 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3428218 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3428218 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3428218 ']' 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.857 12:58:25 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:07.857 [2024-07-15 12:58:25.958563] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:19:07.857 [2024-07-15 12:58:25.958644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.857 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.857 [2024-07-15 12:58:26.019307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.115 [2024-07-15 12:58:26.123231] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.115 [2024-07-15 12:58:26.123300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.116 [2024-07-15 12:58:26.123314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.116 [2024-07-15 12:58:26.123339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.116 [2024-07-15 12:58:26.123349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.116 [2024-07-15 12:58:26.123430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.116 [2024-07-15 12:58:26.123550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.116 [2024-07-15 12:58:26.123552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.116 [2024-07-15 12:58:26.123490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.116 12:58:26 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.374 [2024-07-15 12:58:26.403349] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.374 Malloc1 00:19:08.374 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.375 [2024-07-15 12:58:26.454262] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3428253 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:08.375 12:58:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:08.375 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:10.279 
"tick_rate": 2700000000, 00:19:10.279 "poll_groups": [ 00:19:10.279 { 00:19:10.279 "name": "nvmf_tgt_poll_group_000", 00:19:10.279 "admin_qpairs": 1, 00:19:10.279 "io_qpairs": 1, 00:19:10.279 "current_admin_qpairs": 1, 00:19:10.279 "current_io_qpairs": 1, 00:19:10.279 "pending_bdev_io": 0, 00:19:10.279 "completed_nvme_io": 20660, 00:19:10.279 "transports": [ 00:19:10.279 { 00:19:10.279 "trtype": "TCP" 00:19:10.279 } 00:19:10.279 ] 00:19:10.279 }, 00:19:10.279 { 00:19:10.279 "name": "nvmf_tgt_poll_group_001", 00:19:10.279 "admin_qpairs": 0, 00:19:10.279 "io_qpairs": 1, 00:19:10.279 "current_admin_qpairs": 0, 00:19:10.279 "current_io_qpairs": 1, 00:19:10.279 "pending_bdev_io": 0, 00:19:10.279 "completed_nvme_io": 20916, 00:19:10.279 "transports": [ 00:19:10.279 { 00:19:10.279 "trtype": "TCP" 00:19:10.279 } 00:19:10.279 ] 00:19:10.279 }, 00:19:10.279 { 00:19:10.279 "name": "nvmf_tgt_poll_group_002", 00:19:10.279 "admin_qpairs": 0, 00:19:10.279 "io_qpairs": 1, 00:19:10.279 "current_admin_qpairs": 0, 00:19:10.279 "current_io_qpairs": 1, 00:19:10.279 "pending_bdev_io": 0, 00:19:10.279 "completed_nvme_io": 21129, 00:19:10.279 "transports": [ 00:19:10.279 { 00:19:10.279 "trtype": "TCP" 00:19:10.279 } 00:19:10.279 ] 00:19:10.279 }, 00:19:10.279 { 00:19:10.279 "name": "nvmf_tgt_poll_group_003", 00:19:10.279 "admin_qpairs": 0, 00:19:10.279 "io_qpairs": 1, 00:19:10.279 "current_admin_qpairs": 0, 00:19:10.279 "current_io_qpairs": 1, 00:19:10.279 "pending_bdev_io": 0, 00:19:10.279 "completed_nvme_io": 20289, 00:19:10.279 "transports": [ 00:19:10.279 { 00:19:10.279 "trtype": "TCP" 00:19:10.279 } 00:19:10.279 ] 00:19:10.279 } 00:19:10.279 ] 00:19:10.279 }' 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:10.279 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:10.537 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:10.537 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:10.537 12:58:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3428253 00:19:18.656 Initializing NVMe Controllers 00:19:18.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:18.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:18.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:18.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:18.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:18.656 Initialization complete. Launching workers. 
00:19:18.656 ======================================================== 00:19:18.656 Latency(us) 00:19:18.656 Device Information : IOPS MiB/s Average min max 00:19:18.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10651.96 41.61 6009.85 2814.30 8920.30 00:19:18.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10894.76 42.56 5874.71 2712.41 10065.54 00:19:18.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11020.86 43.05 5808.39 2598.68 8443.49 00:19:18.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10792.76 42.16 5929.77 2261.26 8664.98 00:19:18.656 ======================================================== 00:19:18.656 Total : 43360.34 169.38 5904.76 2261.26 10065.54 00:19:18.656 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.656 rmmod nvme_tcp 00:19:18.656 rmmod nvme_fabrics 00:19:18.656 rmmod nvme_keyring 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3428218 ']' 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3428218 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3428218 ']' 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3428218 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3428218 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3428218' 00:19:18.656 killing process with pid 3428218 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3428218 00:19:18.656 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3428218 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.914 12:58:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.820 12:58:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:20.820 12:58:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:20.820 12:58:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:21.757 12:58:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:23.712 12:58:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:28.981 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.982 12:58:46 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:28.982 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:28.982 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:28.982 Found net devices under 0000:84:00.0: cvl_0_0 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:28.982 Found net devices under 0000:84:00.1: cvl_0_1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.982 
12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:28.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:19:28.982 00:19:28.982 --- 10.0.0.2 ping statistics --- 00:19:28.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.982 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:28.982 00:19:28.982 --- 10.0.0.1 ping statistics --- 00:19:28.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.982 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:28.982 net.core.busy_poll = 1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:28.982 net.core.busy_read = 1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3430870 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3430870 00:19:28.982 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3430870 ']' 00:19:28.983 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.983 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.983 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.983 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.983 12:58:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.983 [2024-07-15 12:58:46.935897] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:19:28.983 [2024-07-15 12:58:46.936002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.983 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.983 [2024-07-15 12:58:47.001100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.983 [2024-07-15 12:58:47.120074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.983 [2024-07-15 12:58:47.120137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.983 [2024-07-15 12:58:47.120151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.983 [2024-07-15 12:58:47.120163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.983 [2024-07-15 12:58:47.120173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.983 [2024-07-15 12:58:47.120325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.983 [2024-07-15 12:58:47.120506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.983 [2024-07-15 12:58:47.120919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.983 [2024-07-15 12:58:47.120924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.983 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.240 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:29.240 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:29.240 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.240 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.240 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 [2024-07-15 12:58:47.308302] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 Malloc1 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.241 [2024-07-15 12:58:47.358863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3431022 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:29.241 12:58:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:29.241 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:31.766 "tick_rate": 2700000000, 00:19:31.766 "poll_groups": [ 00:19:31.766 { 00:19:31.766 "name": "nvmf_tgt_poll_group_000", 00:19:31.766 "admin_qpairs": 1, 00:19:31.766 "io_qpairs": 1, 00:19:31.766 "current_admin_qpairs": 1, 00:19:31.766 "current_io_qpairs": 1, 00:19:31.766 "pending_bdev_io": 0, 00:19:31.766 "completed_nvme_io": 25441, 00:19:31.766 "transports": [ 00:19:31.766 { 00:19:31.766 "trtype": "TCP" 00:19:31.766 } 00:19:31.766 ] 00:19:31.766 }, 00:19:31.766 { 00:19:31.766 "name": "nvmf_tgt_poll_group_001", 00:19:31.766 "admin_qpairs": 0, 00:19:31.766 "io_qpairs": 3, 00:19:31.766 "current_admin_qpairs": 0, 00:19:31.766 "current_io_qpairs": 3, 00:19:31.766 "pending_bdev_io": 0, 00:19:31.766 "completed_nvme_io": 26727, 00:19:31.766 "transports": [ 00:19:31.766 { 00:19:31.766 "trtype": "TCP" 00:19:31.766 } 00:19:31.766 ] 00:19:31.766 }, 00:19:31.766 { 00:19:31.766 "name": "nvmf_tgt_poll_group_002", 00:19:31.766 "admin_qpairs": 0, 00:19:31.766 "io_qpairs": 0, 00:19:31.766 "current_admin_qpairs": 0, 00:19:31.766 "current_io_qpairs": 0, 00:19:31.766 "pending_bdev_io": 0, 00:19:31.766 "completed_nvme_io": 0, 
00:19:31.766 "transports": [ 00:19:31.766 { 00:19:31.766 "trtype": "TCP" 00:19:31.766 } 00:19:31.766 ] 00:19:31.766 }, 00:19:31.766 { 00:19:31.766 "name": "nvmf_tgt_poll_group_003", 00:19:31.766 "admin_qpairs": 0, 00:19:31.766 "io_qpairs": 0, 00:19:31.766 "current_admin_qpairs": 0, 00:19:31.766 "current_io_qpairs": 0, 00:19:31.766 "pending_bdev_io": 0, 00:19:31.766 "completed_nvme_io": 0, 00:19:31.766 "transports": [ 00:19:31.766 { 00:19:31.766 "trtype": "TCP" 00:19:31.766 } 00:19:31.766 ] 00:19:31.766 } 00:19:31.766 ] 00:19:31.766 }' 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:31.766 12:58:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3431022 00:19:39.868 Initializing NVMe Controllers 00:19:39.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:39.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:39.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:39.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:39.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:39.868 Initialization complete. Launching workers. 00:19:39.868 ======================================================== 00:19:39.868 Latency(us) 00:19:39.868 Device Information : IOPS MiB/s Average min max 00:19:39.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4531.50 17.70 14126.30 1860.39 60821.58 00:19:39.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4745.80 18.54 13527.64 2108.49 61085.64 00:19:39.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13194.90 51.54 4850.70 1650.04 6812.23 00:19:39.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4744.40 18.53 13535.25 1756.55 62419.19 00:19:39.868 ======================================================== 00:19:39.868 Total : 27216.60 106.31 9421.97 1650.04 62419.19 00:19:39.868 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.868 rmmod nvme_tcp 00:19:39.868 rmmod nvme_fabrics 00:19:39.868 rmmod nvme_keyring 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3430870 ']' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 3430870 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3430870 ']' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3430870 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3430870 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3430870' 00:19:39.868 killing process with pid 3430870 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3430870 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3430870 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.868 12:58:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.157 12:59:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.157 12:59:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:43.157 00:19:43.157 real 0m45.024s 00:19:43.157 user 2m40.023s 00:19:43.157 sys 0m9.820s 00:19:43.157 12:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.157 12:59:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.157 ************************************ 00:19:43.157 END TEST nvmf_perf_adq 00:19:43.157 ************************************ 00:19:43.157 12:59:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:43.157 12:59:00 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:43.157 12:59:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:43.157 12:59:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.157 12:59:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.157 ************************************ 00:19:43.157 START TEST nvmf_shutdown 00:19:43.157 ************************************ 00:19:43.157 12:59:00 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:43.157 * Looking for test storage... 
00:19:43.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:43.157 ************************************ 00:19:43.157 START TEST nvmf_shutdown_tc1 00:19:43.157 ************************************ 00:19:43.157 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:43.157 12:59:01 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.158 12:59:01 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:45.062 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:45.062 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:45.062 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.063 12:59:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:45.063 Found net devices under 0000:84:00.0: cvl_0_0 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:45.063 Found net devices under 0000:84:00.1: cvl_0_1 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.063 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.321 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:45.322 00:19:45.322 --- 10.0.0.2 ping statistics --- 00:19:45.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.322 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:19:45.322 00:19:45.322 --- 10.0.0.1 ping statistics --- 00:19:45.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.322 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3434343 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3434343 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3434343 ']' 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.322 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.322 [2024-07-15 12:59:03.362232] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
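At this point nvmftestinit has carved out a self-contained NVMe/TCP test network: one port of the E810 (ice) pair, cvl_0_0, is moved into a private namespace and addressed as the target, its sibling cvl_0_1 stays in the default namespace as the initiator, both directions are verified with a single ping, and nvmf_tgt is then started inside the namespace with -i 0 -e 0xFFFF -m 0x1E. A condensed sketch of those steps, using only the commands and names already shown in this trace:

  # Condensed nvmf_tcp_init sequence (interface/namespace names from this log).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check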
00:19:45.322 [2024-07-15 12:59:03.362334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.322 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.322 [2024-07-15 12:59:03.432148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.580 [2024-07-15 12:59:03.545256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.580 [2024-07-15 12:59:03.545308] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.580 [2024-07-15 12:59:03.545338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.580 [2024-07-15 12:59:03.545350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.580 [2024-07-15 12:59:03.545361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.580 [2024-07-15 12:59:03.545458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.580 [2024-07-15 12:59:03.545801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.580 [2024-07-15 12:59:03.545852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.580 [2024-07-15 12:59:03.545856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.580 [2024-07-15 12:59:03.709685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:45.580 12:59:03 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.580 12:59:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.580 Malloc1 00:19:45.838 [2024-07-15 12:59:03.799449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.838 Malloc2 00:19:45.838 Malloc3 00:19:45.838 Malloc4 00:19:45.838 Malloc5 00:19:45.838 Malloc6 00:19:46.097 Malloc7 00:19:46.097 Malloc8 00:19:46.097 Malloc9 00:19:46.097 Malloc10 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3434515 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3434515 
/var/tmp/bdevperf.sock 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3434515 ']' 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 
"name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 
00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.097 { 00:19:46.097 "params": { 00:19:46.097 "name": "Nvme$subsystem", 00:19:46.097 "trtype": "$TEST_TRANSPORT", 00:19:46.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.097 "adrfam": "ipv4", 00:19:46.097 "trsvcid": "$NVMF_PORT", 00:19:46.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.097 "hdgst": ${hdgst:-false}, 00:19:46.097 "ddgst": ${ddgst:-false} 00:19:46.097 }, 00:19:46.097 "method": "bdev_nvme_attach_controller" 00:19:46.097 } 00:19:46.097 EOF 00:19:46.097 )") 00:19:46.097 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.098 { 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme$subsystem", 00:19:46.098 "trtype": "$TEST_TRANSPORT", 00:19:46.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "$NVMF_PORT", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.098 "hdgst": ${hdgst:-false}, 00:19:46.098 "ddgst": ${ddgst:-false} 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 } 00:19:46.098 EOF 00:19:46.098 )") 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.098 { 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme$subsystem", 00:19:46.098 "trtype": "$TEST_TRANSPORT", 00:19:46.098 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "$NVMF_PORT", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.098 "hdgst": ${hdgst:-false}, 00:19:46.098 "ddgst": ${ddgst:-false} 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 } 00:19:46.098 EOF 00:19:46.098 )") 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
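Earlier in this trace the target side was populated through a batched rpcs.txt, so only the resulting Malloc1-Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice are echoed; the initiator-side stanzas assembled just above are merged by jq and printed in full just below. For reference, a hedged sketch of what the per-subsystem target setup plausibly amounts to, using standard rpc.py method names (the actual batch contents are not shown in this log, and the serial number here is illustrative):

  # One of the ten subsystems; 64 MiB / 512 B match MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # traced explicitly above
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420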
00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:46.098 12:59:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme1", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme2", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme3", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme4", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme5", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme6", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme7", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme8", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:46.098 "hdgst": false, 
00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme9", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 },{ 00:19:46.098 "params": { 00:19:46.098 "name": "Nvme10", 00:19:46.098 "trtype": "tcp", 00:19:46.098 "traddr": "10.0.0.2", 00:19:46.098 "adrfam": "ipv4", 00:19:46.098 "trsvcid": "4420", 00:19:46.098 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:46.098 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:46.098 "hdgst": false, 00:19:46.098 "ddgst": false 00:19:46.098 }, 00:19:46.098 "method": "bdev_nvme_attach_controller" 00:19:46.098 }' 00:19:46.098 [2024-07-15 12:59:04.299924] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:19:46.098 [2024-07-15 12:59:04.300001] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:46.356 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.356 [2024-07-15 12:59:04.366019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.356 [2024-07-15 12:59:04.477021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3434515 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:48.255 12:59:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:49.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3434515 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3434343 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:49.187 12:59:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.187 { 00:19:49.187 "params": { 00:19:49.187 "name": "Nvme$subsystem", 00:19:49.187 "trtype": "$TEST_TRANSPORT", 00:19:49.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.187 "adrfam": "ipv4", 00:19:49.187 "trsvcid": "$NVMF_PORT", 00:19:49.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.187 "hdgst": ${hdgst:-false}, 00:19:49.187 "ddgst": ${ddgst:-false} 00:19:49.187 }, 00:19:49.187 "method": "bdev_nvme_attach_controller" 00:19:49.187 } 00:19:49.187 EOF 00:19:49.187 )") 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.187 { 00:19:49.187 "params": { 00:19:49.187 "name": "Nvme$subsystem", 00:19:49.187 "trtype": "$TEST_TRANSPORT", 00:19:49.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.187 "adrfam": "ipv4", 00:19:49.187 "trsvcid": "$NVMF_PORT", 00:19:49.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.187 "hdgst": ${hdgst:-false}, 00:19:49.187 "ddgst": ${ddgst:-false} 00:19:49.187 }, 00:19:49.187 "method": "bdev_nvme_attach_controller" 00:19:49.187 } 00:19:49.187 EOF 00:19:49.187 )") 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.187 { 00:19:49.187 "params": { 00:19:49.187 "name": "Nvme$subsystem", 00:19:49.187 "trtype": "$TEST_TRANSPORT", 00:19:49.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.187 "adrfam": "ipv4", 00:19:49.187 "trsvcid": "$NVMF_PORT", 00:19:49.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.187 "hdgst": ${hdgst:-false}, 00:19:49.187 "ddgst": ${ddgst:-false} 00:19:49.187 }, 00:19:49.187 "method": "bdev_nvme_attach_controller" 00:19:49.187 } 00:19:49.187 EOF 00:19:49.187 )") 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.187 { 00:19:49.187 "params": { 00:19:49.187 "name": "Nvme$subsystem", 00:19:49.187 "trtype": "$TEST_TRANSPORT", 00:19:49.187 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.187 "adrfam": "ipv4", 00:19:49.187 "trsvcid": "$NVMF_PORT", 00:19:49.187 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.187 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.187 "hdgst": ${hdgst:-false}, 00:19:49.187 "ddgst": ${ddgst:-false} 00:19:49.187 }, 00:19:49.187 "method": "bdev_nvme_attach_controller" 00:19:49.187 } 00:19:49.187 EOF 00:19:49.187 )") 00:19:49.187 12:59:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.187 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.188 { 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme$subsystem", 00:19:49.188 "trtype": "$TEST_TRANSPORT", 00:19:49.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "$NVMF_PORT", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.188 "hdgst": ${hdgst:-false}, 00:19:49.188 "ddgst": ${ddgst:-false} 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 } 00:19:49.188 EOF 00:19:49.188 )") 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.188 { 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme$subsystem", 00:19:49.188 "trtype": "$TEST_TRANSPORT", 00:19:49.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "$NVMF_PORT", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.188 "hdgst": ${hdgst:-false}, 00:19:49.188 "ddgst": ${ddgst:-false} 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 } 00:19:49.188 EOF 00:19:49.188 )") 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.188 { 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme$subsystem", 00:19:49.188 "trtype": "$TEST_TRANSPORT", 00:19:49.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "$NVMF_PORT", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.188 "hdgst": ${hdgst:-false}, 00:19:49.188 "ddgst": ${ddgst:-false} 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 } 00:19:49.188 EOF 00:19:49.188 )") 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.188 { 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme$subsystem", 00:19:49.188 "trtype": "$TEST_TRANSPORT", 00:19:49.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "$NVMF_PORT", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.188 "hdgst": ${hdgst:-false}, 00:19:49.188 "ddgst": ${ddgst:-false} 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 } 00:19:49.188 EOF 00:19:49.188 )") 00:19:49.188 12:59:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.188 { 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme$subsystem", 00:19:49.188 "trtype": "$TEST_TRANSPORT", 00:19:49.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "$NVMF_PORT", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.188 "hdgst": ${hdgst:-false}, 00:19:49.188 "ddgst": ${ddgst:-false} 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 } 00:19:49.188 EOF 00:19:49.188 )") 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.188 { 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme$subsystem", 00:19:49.188 "trtype": "$TEST_TRANSPORT", 00:19:49.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "$NVMF_PORT", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.188 "hdgst": ${hdgst:-false}, 00:19:49.188 "ddgst": ${ddgst:-false} 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 } 00:19:49.188 EOF 00:19:49.188 )") 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
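This second pass rebuilds the same ten-controller initiator config, this time for the real bdevperf run launched at target/shutdown.sh@91 with -q 64 -o 65536 -w verify -t 1 (queue depth 64, 64 KiB I/Os, verify workload, one second). A sketch of the equivalent run with the generated JSON written to a file first, assuming test/nvmf/common.sh has been sourced so gen_nvmf_target_json is available and the target is still listening on 10.0.0.2:4420; the file name is illustrative:

  # Same perf run, config saved to a file instead of a process-substitution fd.
  gen_nvmf_target_json {1..10} > /tmp/nvmf_initiator.json
  ./build/examples/bdevperf --json /tmp/nvmf_initiator.json -q 64 -o 65536 -w verify -t 1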
00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:49.188 12:59:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme1", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme2", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme3", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme4", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme5", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme6", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme7", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme8", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:49.188 "hdgst": false, 
00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme9", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.188 },{ 00:19:49.188 "params": { 00:19:49.188 "name": "Nvme10", 00:19:49.188 "trtype": "tcp", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "adrfam": "ipv4", 00:19:49.188 "trsvcid": "4420", 00:19:49.188 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:49.188 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:49.188 "hdgst": false, 00:19:49.188 "ddgst": false 00:19:49.188 }, 00:19:49.188 "method": "bdev_nvme_attach_controller" 00:19:49.189 }' 00:19:49.189 [2024-07-15 12:59:07.334013] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:19:49.189 [2024-07-15 12:59:07.334120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434816 ] 00:19:49.189 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.446 [2024-07-15 12:59:07.401603] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.446 [2024-07-15 12:59:07.514883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.820 Running I/O for 1 seconds... 00:19:51.754 00:19:51.754 Latency(us) 00:19:51.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme1n1 : 1.15 223.49 13.97 0.00 0.00 283643.45 36894.34 242337.56 00:19:51.754 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme2n1 : 1.10 239.26 14.95 0.00 0.00 258956.69 6650.69 228356.55 00:19:51.754 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme3n1 : 1.15 223.25 13.95 0.00 0.00 273692.07 20680.25 262532.36 00:19:51.754 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme4n1 : 1.12 229.29 14.33 0.00 0.00 262197.29 18155.90 267192.70 00:19:51.754 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme5n1 : 1.13 230.63 14.41 0.00 0.00 254184.83 6747.78 259425.47 00:19:51.754 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme6n1 : 1.15 222.49 13.91 0.00 0.00 262058.86 20971.52 259425.47 00:19:51.754 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme7n1 : 1.17 273.07 17.07 0.00 0.00 209212.00 12718.84 250104.79 00:19:51.754 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 
0x0 length 0x400 00:19:51.754 Nvme8n1 : 1.15 221.66 13.85 0.00 0.00 254133.29 18350.08 260978.92 00:19:51.754 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme9n1 : 1.16 220.56 13.79 0.00 0.00 251218.68 20486.07 271853.04 00:19:51.754 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:51.754 Verification LBA range: start 0x0 length 0x400 00:19:51.754 Nvme10n1 : 1.17 230.72 14.42 0.00 0.00 235188.84 1711.22 288940.94 00:19:51.754 =================================================================================================================== 00:19:51.754 Total : 2314.42 144.65 0.00 0.00 253256.43 1711.22 288940.94 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.012 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.012 rmmod nvme_tcp 00:19:52.012 rmmod nvme_fabrics 00:19:52.012 rmmod nvme_keyring 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3434343 ']' 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3434343 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3434343 ']' 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3434343 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3434343 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3434343' 00:19:52.270 killing process with pid 3434343 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3434343 00:19:52.270 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3434343 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.836 12:59:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.741 00:19:54.741 real 0m11.792s 00:19:54.741 user 0m33.593s 00:19:54.741 sys 0m3.249s 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:54.741 ************************************ 00:19:54.741 END TEST nvmf_shutdown_tc1 00:19:54.741 ************************************ 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:54.741 ************************************ 00:19:54.741 START TEST nvmf_shutdown_tc2 00:19:54.741 ************************************ 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:54.741 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.742 12:59:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:54.742 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:54.742 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:54.742 Found net devices under 0000:84:00.0: cvl_0_0 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:54.742 Found net devices under 0000:84:00.1: cvl_0_1 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.742 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.001 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.001 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.001 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.001 12:59:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:19:55.001 00:19:55.001 --- 10.0.0.2 ping statistics --- 00:19:55.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.001 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:19:55.001 00:19:55.001 --- 10.0.0.1 ping statistics --- 00:19:55.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.001 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=3435580 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3435580 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3435580 ']' 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.001 12:59:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.001 [2024-07-15 12:59:13.128057] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:19:55.001 [2024-07-15 12:59:13.128150] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.001 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.001 [2024-07-15 12:59:13.195116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.258 [2024-07-15 12:59:13.300390] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.258 [2024-07-15 12:59:13.300446] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.258 [2024-07-15 12:59:13.300474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.258 [2024-07-15 12:59:13.300484] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.258 [2024-07-15 12:59:13.300494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
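For context, the startup traced here is the nvmfappstart/waitforlisten pair: nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with -i 0 -e 0xFFFF -m 0x1E, its pid is recorded in nvmfpid, and the script blocks until the RPC socket answers. A rough sketch of that start-and-wait step, assuming a plain rpc_get_methods probe is a sufficient readiness check (the real helper in autotest_common.sh adds retries, timeouts and error handling not shown here):

#!/usr/bin/env bash
# Hedged sketch: start nvmf_tgt in the target netns and poll its RPC socket.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NETNS=${NETNS:-cvl_0_0_ns_spdk}
RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk.sock}

ip netns exec "$NETNS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# waitforlisten-style loop: keep probing until the RPC server responds or the
# target process dies.
for _ in $(seq 1 100); do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
  if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
    echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
    exit 0
  fi
  sleep 0.1
done
echo "timed out waiting for $RPC_SOCK" >&2
exit 1

Once this returns, the log continues with the reactor start-up notices and the nvmf_create_transport / create_subsystems RPC batch for Malloc1 through Malloc10.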
00:19:55.258 [2024-07-15 12:59:13.300581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.258 [2024-07-15 12:59:13.300981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.258 [2024-07-15 12:59:13.301047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.258 [2024-07-15 12:59:13.301050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.191 [2024-07-15 12:59:14.137911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.191 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.191 Malloc1 00:19:56.191 [2024-07-15 12:59:14.226173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.191 Malloc2 00:19:56.191 Malloc3 00:19:56.191 Malloc4 00:19:56.191 Malloc5 00:19:56.449 Malloc6 00:19:56.449 Malloc7 00:19:56.449 Malloc8 00:19:56.449 Malloc9 00:19:56.449 Malloc10 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3435888 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3435888 /var/tmp/bdevperf.sock 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3435888 ']' 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:56.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 
00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.708 { 00:19:56.708 "params": { 00:19:56.708 "name": "Nvme$subsystem", 00:19:56.708 "trtype": "$TEST_TRANSPORT", 00:19:56.708 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.708 "adrfam": "ipv4", 00:19:56.708 "trsvcid": "$NVMF_PORT", 00:19:56.708 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.708 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.708 "hdgst": ${hdgst:-false}, 00:19:56.708 "ddgst": ${ddgst:-false} 00:19:56.708 }, 00:19:56.708 "method": "bdev_nvme_attach_controller" 00:19:56.708 } 00:19:56.708 EOF 00:19:56.708 )") 00:19:56.708 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.709 { 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme$subsystem", 00:19:56.709 "trtype": "$TEST_TRANSPORT", 00:19:56.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "$NVMF_PORT", 00:19:56.709 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.709 "hdgst": ${hdgst:-false}, 00:19:56.709 "ddgst": ${ddgst:-false} 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 } 00:19:56.709 EOF 00:19:56.709 )") 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.709 { 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme$subsystem", 00:19:56.709 "trtype": "$TEST_TRANSPORT", 00:19:56.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "$NVMF_PORT", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.709 "hdgst": ${hdgst:-false}, 00:19:56.709 "ddgst": ${ddgst:-false} 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 } 00:19:56.709 EOF 00:19:56.709 )") 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.709 { 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme$subsystem", 00:19:56.709 "trtype": "$TEST_TRANSPORT", 00:19:56.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "$NVMF_PORT", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.709 "hdgst": ${hdgst:-false}, 00:19:56.709 "ddgst": ${ddgst:-false} 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 } 00:19:56.709 EOF 00:19:56.709 )") 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:56.709 12:59:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme1", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme2", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme3", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme4", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme5", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme6", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme7", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme8", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:56.709 "hdgst": false, 
00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme9", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 },{ 00:19:56.709 "params": { 00:19:56.709 "name": "Nvme10", 00:19:56.709 "trtype": "tcp", 00:19:56.709 "traddr": "10.0.0.2", 00:19:56.709 "adrfam": "ipv4", 00:19:56.709 "trsvcid": "4420", 00:19:56.709 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:56.709 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:56.709 "hdgst": false, 00:19:56.709 "ddgst": false 00:19:56.709 }, 00:19:56.709 "method": "bdev_nvme_attach_controller" 00:19:56.709 }' 00:19:56.709 [2024-07-15 12:59:14.721056] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:19:56.709 [2024-07-15 12:59:14.721129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435888 ] 00:19:56.709 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.709 [2024-07-15 12:59:14.785395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.709 [2024-07-15 12:59:14.896399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.296 Running I/O for 10 seconds... 00:19:59.296 12:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.296 12:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:59.296 12:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:59.296 12:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.296 12:59:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:59.296 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3435888 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3435888 ']' 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3435888 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # 
uname 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3435888 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3435888' 00:19:59.554 killing process with pid 3435888 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3435888 00:19:59.554 12:59:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3435888 00:19:59.811 Received shutdown signal, test time was about 0.933049 seconds 00:19:59.811 00:19:59.811 Latency(us) 00:19:59.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.811 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme1n1 : 0.92 208.54 13.03 0.00 0.00 303378.08 20874.43 285834.05 00:19:59.811 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme2n1 : 0.93 275.55 17.22 0.00 0.00 224717.56 22524.97 267192.70 00:19:59.811 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme3n1 : 0.93 274.62 17.16 0.00 0.00 221047.66 20680.25 248551.35 00:19:59.811 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme4n1 : 0.90 233.82 14.61 0.00 0.00 247994.80 12621.75 250104.79 00:19:59.811 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme5n1 : 0.88 217.10 13.57 0.00 0.00 266044.81 18932.62 259425.47 00:19:59.811 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme6n1 : 0.91 212.08 13.26 0.00 0.00 267589.40 20874.43 262532.36 00:19:59.811 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme7n1 : 0.90 214.02 13.38 0.00 0.00 258014.88 24758.04 242337.56 00:19:59.811 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme8n1 : 0.90 214.37 13.40 0.00 0.00 252631.67 41166.32 225249.66 00:19:59.811 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme9n1 : 0.91 210.37 13.15 0.00 0.00 252414.99 19709.35 268746.15 00:19:59.811 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.811 Verification LBA range: start 0x0 length 0x400 00:19:59.811 Nvme10n1 : 0.92 207.79 12.99 0.00 0.00 250394.30 20486.07 292047.83 00:19:59.811 =================================================================================================================== 00:19:59.811 Total : 
2268.26 141.77 0.00 0.00 252410.58 12621.75 292047.83 00:20:00.069 12:59:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3435580 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.999 rmmod nvme_tcp 00:20:00.999 rmmod nvme_fabrics 00:20:00.999 rmmod nvme_keyring 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3435580 ']' 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3435580 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3435580 ']' 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3435580 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3435580 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3435580' 00:20:00.999 killing process with pid 3435580 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3435580 00:20:00.999 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3435580 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:01.566 12:59:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.099 00:20:04.099 real 0m8.825s 00:20:04.099 user 0m28.532s 00:20:04.099 sys 0m1.567s 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:04.099 ************************************ 00:20:04.099 END TEST nvmf_shutdown_tc2 00:20:04.099 ************************************ 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:04.099 ************************************ 00:20:04.099 START TEST nvmf_shutdown_tc3 00:20:04.099 ************************************ 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma 
]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:04.099 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:04.099 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:04.099 Found net devices under 0000:84:00.0: cvl_0_0 00:20:04.099 12:59:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:04.099 Found net devices under 0000:84:00.1: cvl_0_1 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.099 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:20:04.100 00:20:04.100 --- 10.0.0.2 ping statistics --- 00:20:04.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.100 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:20:04.100 00:20:04.100 --- 10.0.0.1 ping statistics --- 00:20:04.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.100 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3436807 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3436807 00:20:04.100 12:59:21 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3436807 ']' 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.100 12:59:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.100 [2024-07-15 12:59:22.015761] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:04.100 [2024-07-15 12:59:22.015844] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.100 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.100 [2024-07-15 12:59:22.077831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.100 [2024-07-15 12:59:22.179365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.100 [2024-07-15 12:59:22.179420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.100 [2024-07-15 12:59:22.179448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.100 [2024-07-15 12:59:22.179459] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.100 [2024-07-15 12:59:22.179468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
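The trace above shows nvmfappstart running nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that start-and-wait pattern follows; it is illustrative only, not the exact helpers from nvmf/common.sh and autotest_common.sh, and the rpc.py path is an assumption (the namespace, core mask and binary path are copied from the trace).

# Illustrative sketch only -- not the verbatim nvmfappstart/waitforlisten helpers.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

# Launch the target inside the namespace with the same shm id, tracepoint mask and core mask.
ip netns exec "$NVMF_TARGET_NAMESPACE" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Rough stand-in for waitforlisten: poll the default RPC socket until the app responds.
for _ in $(seq 1 100); do
    "$RPC" -s /var/tmp/spdk.sock framework_wait_init &> /dev/null && break
    sleep 0.1
done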
00:20:04.100 [2024-07-15 12:59:22.179529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.100 [2024-07-15 12:59:22.179590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.100 [2024-07-15 12:59:22.179658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:04.100 [2024-07-15 12:59:22.179661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.357 [2024-07-15 12:59:22.332480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.357 12:59:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.357 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.358 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.358 Malloc1 00:20:04.358 [2024-07-15 12:59:22.422041] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.358 Malloc2 00:20:04.358 Malloc3 00:20:04.358 Malloc4 00:20:04.615 Malloc5 00:20:04.615 Malloc6 00:20:04.615 Malloc7 00:20:04.615 Malloc8 00:20:04.615 Malloc9 00:20:04.873 Malloc10 00:20:04.873 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.873 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:04.873 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3436986 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3436986 /var/tmp/bdevperf.sock 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3436986 ']' 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:04.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 
00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.874 { 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme$subsystem", 00:20:04.874 "trtype": "$TEST_TRANSPORT", 00:20:04.874 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.874 "adrfam": "ipv4", 00:20:04.874 "trsvcid": "$NVMF_PORT", 00:20:04.874 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.874 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.874 "hdgst": ${hdgst:-false}, 00:20:04.874 "ddgst": ${ddgst:-false} 00:20:04.874 }, 00:20:04.874 "method": "bdev_nvme_attach_controller" 00:20:04.874 } 00:20:04.874 EOF 00:20:04.874 )") 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:04.874 12:59:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:04.874 "params": { 00:20:04.874 "name": "Nvme1", 00:20:04.874 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme2", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme3", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme4", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme5", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme6", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme7", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme8", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:04.875 "hdgst": false, 
00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme9", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 },{ 00:20:04.875 "params": { 00:20:04.875 "name": "Nvme10", 00:20:04.875 "trtype": "tcp", 00:20:04.875 "traddr": "10.0.0.2", 00:20:04.875 "adrfam": "ipv4", 00:20:04.875 "trsvcid": "4420", 00:20:04.875 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:04.875 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:04.875 "hdgst": false, 00:20:04.875 "ddgst": false 00:20:04.875 }, 00:20:04.875 "method": "bdev_nvme_attach_controller" 00:20:04.875 }' 00:20:04.875 [2024-07-15 12:59:22.940918] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:04.875 [2024-07-15 12:59:22.940993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3436986 ] 00:20:04.875 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.875 [2024-07-15 12:59:23.004987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.133 [2024-07-15 12:59:23.116357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.504 Running I/O for 10 seconds... 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:06.762 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.019 12:59:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.019 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=73 00:20:07.019 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 73 -ge 100 ']' 00:20:07.020 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=137 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 137 -ge 100 ']' 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3436807 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3436807 ']' 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3436807 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3436807 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3436807' 00:20:07.295 killing process with pid 3436807 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3436807 00:20:07.295 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3436807 00:20:07.295 
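The decision point traced above is the waitforio loop from target/shutdown.sh: it repeatedly asks bdevperf's RPC server for Nvme1n1 iostat and declares success once num_read_ops reaches 100, at which point the test is allowed to kill the target mid-I/O. A condensed sketch of that loop is below; it assumes the rpc_cmd helper from autotest_common.sh seen in the trace, and the socket and bdev name (which the real function takes as arguments) are hard-coded for brevity.

# Condensed sketch of the waitforio loop from target/shutdown.sh (xtrace plumbing omitted).
waitforio() {
    local i read_io_count ret=1
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf's RPC server for the bdev's iostat and pull out the read count.
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}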
[2024-07-15 12:59:25.325695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3ca80 is same with the state(5) to be set 00:20:07.295
[... the same recv-state error for tqpair=0x1a3ca80 repeated through 12:59:25.327247 ...]
[2024-07-15 12:59:25.328858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3f480 is same with the state(5) to be set 00:20:07.296
[... the same recv-state error for tqpair=0x1a3f480 repeated through 12:59:25.328965 ...]
[2024-07-15 12:59:25.330262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3cf20 is same with the state(5) to be set 00:20:07.296
[... the same recv-state error for tqpair=0x1a3cf20 repeated through 12:59:25.331104 ...]
[2024-07-15 12:59:25.331479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.296
[2024-07-15 12:59:25.331519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.296
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1-3, 12:59:25.331537-12:59:25.331605 ...]
[2024-07-15 12:59:25.331619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40690 is same with the state(5) to be set 00:20:07.296
[... the same four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated on the second admin qpair, 12:59:25.331745-12:59:25.331867 ...]
[2024-07-15 12:59:25.331880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e200 is same with the state(5) to be set 00:20:07.296
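For context on the repeated diagnostic above: nvmf_tcp_qpair_set_recv_state reports this when a queue pair's receive state is asked to change to the value it already holds, which is why the same line recurs for each tqpair while the connections are torn down. A minimal sketch of that kind of guard follows, using a placeholder struct and state numbering; it is illustrative only and is not SPDK's actual tcp.c code.

    #include <stdio.h>

    /* Illustrative guard (not SPDK's tcp.c): setting the receive state to the
     * value it already holds is reported and treated as a no-op. */
    struct tqpair { int recv_state; };

    static void set_recv_state(struct tqpair *q, int state)
    {
        if (q->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)q, state);
            return;                 /* nothing to change */
        }
        q->recv_state = state;
    }

    int main(void)
    {
        struct tqpair q = { .recv_state = 5 };
        set_recv_state(&q, 5);      /* triggers the diagnostic seen in the log */
        set_recv_state(&q, 6);      /* normal transition, no message */
        return 0;
    }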
[2024-07-15 12:59:25.332988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3d3c0 is same with the state(5) to be set 00:20:07.296
[... the same recv-state error for tqpair=0x1a3d3c0 repeated through 12:59:25.333896, its output interleaved with the I/O abort notices that follow ...]
[2024-07-15 12:59:25.333201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.296
[2024-07-15 12:59:25.333225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.296
[... the same WRITE / ABORTED - SQ DELETION pair repeated for sqid:1 cid:1-20 (lba:16512-18944, len:128), 12:59:25.333256-12:59:25.333922 ...]
[... the same pattern continued for sqid:1 cid:21-63 (lba:19072-24448, len:128), each WRITE likewise completed with ABORTED - SQ DELETION (00/08), 12:59:25.333937-12:59:25.335254 ...]
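For reference on the aborted completions above: the "(00/08)" pair printed with each one appears to be the NVMe status code type and status code; SCT 0x0 with SC 0x08 is the generic status "Command Aborted due to SQ Deletion", consistent with the submission queue being deleted while these WRITEs were still outstanding. A small stand-alone decode of the status halfword is sketched below (illustrative only, not SPDK's print routine).

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the NVMe completion status halfword (CQE DW3 bits 31:16):
     * bit 0 = phase, bits 8:1 = status code, bits 11:9 = status code type,
     * bit 15 = do-not-retry. 0x0010 encodes sct=0x0, sc=0x08, dnr=0. */
    int main(void)
    {
        uint16_t status = 0x0010;
        unsigned sc  = (status >> 1) & 0xffu;
        unsigned sct = (status >> 9) & 0x7u;
        unsigned dnr = (status >> 15) & 0x1u;
        printf("sct=%02x sc=%02x dnr=%u\n", sct, sc, dnr);  /* prints sct=00 sc=08 dnr=0 */
        return 0;
    }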
[2024-07-15 12:59:25.335313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3d880 is same with the state(5) to be set 00:20:07.298
[... the same recv-state error for tqpair=0x1a3d880 repeated through 12:59:25.336157 ...]
[2024-07-15 12:59:25.335873] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9f8c20 was disconnected and freed. reset controller. 00:20:07.298
[2024-07-15 12:59:25.337351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298
[... the same recv-state error for tqpair=0x1a3dd20 repeated through 12:59:25.338177 ...]
[2024-07-15 12:59:25.338198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338218] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338262] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:07.298 [2024-07-15 12:59:25.338283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338305] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40690 (9): Bad file descriptor 00:20:07.298 [2024-07-15 12:59:25.338330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338647]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.338771] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3dd20 is same with the state(5) to be set 00:20:07.298 [2024-07-15 12:59:25.340033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.299 [2024-07-15 12:59:25.340071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40690 with addr=10.0.0.2, port=4420 00:20:07.299 [2024-07-15 12:59:25.340089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40690 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340162] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.299 [2024-07-15 12:59:25.340239] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.299 [2024-07-15 12:59:25.340351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340387] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340446] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340458] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 
is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340599] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340716] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340796] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340834] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340901] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340914] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40690 (9): Bad file descriptor 00:20:07.299 [2024-07-15 12:59:25.340977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.340990] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341028] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to 
be set 00:20:07.299 [2024-07-15 12:59:25.341080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341102] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.299 [2024-07-15 12:59:25.341117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e1e0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:07.299 [2024-07-15 12:59:25.341659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:07.299 [2024-07-15 12:59:25.341693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:07.299 [2024-07-15 12:59:25.341765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.341788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.341810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.341825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.341839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.341853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.341867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.341881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.341894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bceb0 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.341942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.341962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.341977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.341991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b4c90 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.342132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57e200 (9): Bad file descriptor 00:20:07.299 [2024-07-15 12:59:25.342181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5c980 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.342342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4850 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.342503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 
12:59:25.342565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.299 [2024-07-15 12:59:25.342605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.299 [2024-07-15 12:59:25.342617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c7880 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.342767] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.299 [2024-07-15 12:59:25.342979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343005] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343031] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343049] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343098] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with 
the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343314] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343350] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.299 [2024-07-15 12:59:25.343373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.299 [2024-07-15 12:59:25.343413] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343773] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3e680 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.343949] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.300 [2024-07-15 12:59:25.344701] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.300 [2024-07-15 12:59:25.345151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345227] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345251] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) 
to be set 00:20:07.300 [2024-07-15 12:59:25.345274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345286] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345370] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345400] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345429] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345492] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.345504] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.300 [2024-07-15 12:59:25.349187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:07.300 [2024-07-15 12:59:25.349505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.300 [2024-07-15 12:59:25.349533] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40690 with addr=10.0.0.2, port=4420 00:20:07.300 [2024-07-15 12:59:25.349549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40690 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.349628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40690 (9): Bad file descriptor 00:20:07.300 [2024-07-15 12:59:25.349694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:07.300 [2024-07-15 12:59:25.349711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:07.300 [2024-07-15 12:59:25.349748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:07.300 [2024-07-15 12:59:25.349836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.300 [2024-07-15 12:59:25.351700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bceb0 (9): Bad file descriptor 00:20:07.300 [2024-07-15 12:59:25.351756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b4c90 (9): Bad file descriptor 00:20:07.300 [2024-07-15 12:59:25.351821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.351842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.351858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.351872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.351887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.351901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.351915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.351927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.351940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x492610 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.351992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.352027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.352055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.352104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.300 [2024-07-15 12:59:25.352143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4fb00 is same with the state(5) to be set 00:20:07.300 [2024-07-15 12:59:25.352216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5c980 (9): Bad file descriptor 00:20:07.300 [2024-07-15 12:59:25.352246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4850 (9): Bad file descriptor 00:20:07.300 [2024-07-15 12:59:25.352274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c7880 (9): Bad file descriptor 00:20:07.300 [2024-07-15 12:59:25.352453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352644] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.352974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.352988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.353004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.353043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.353059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.300 [2024-07-15 12:59:25.353071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.300 [2024-07-15 12:59:25.353086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.353982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.353995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.301 [2024-07-15 12:59:25.354414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.301 [2024-07-15 12:59:25.354427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa004f0 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.355773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.301 [2024-07-15 12:59:25.356092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.301 [2024-07-15 12:59:25.356119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57e200 with addr=10.0.0.2, port=4420 00:20:07.301 [2024-07-15 12:59:25.356134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e200 is same with the state(5) to be set 00:20:07.301 [2024-07-15 
12:59:25.356511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57e200 (9): Bad file descriptor 00:20:07.301 [2024-07-15 12:59:25.356642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.301 [2024-07-15 12:59:25.356662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.301 [2024-07-15 12:59:25.356675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.301 [2024-07-15 12:59:25.356844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.301 [2024-07-15 12:59:25.359501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:07.301 [2024-07-15 12:59:25.359814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.301 [2024-07-15 12:59:25.359842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40690 with addr=10.0.0.2, port=4420 00:20:07.301 [2024-07-15 12:59:25.359863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40690 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.359972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40690 (9): Bad file descriptor 00:20:07.301 [2024-07-15 12:59:25.360143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:07.301 [2024-07-15 12:59:25.360164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:07.301 [2024-07-15 12:59:25.360177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:07.301 [2024-07-15 12:59:25.360305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.301 [2024-07-15 12:59:25.361454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361511] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361523] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361583] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361753] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x492610 (9): Bad file descriptor 00:20:07.301 [2024-07-15 12:59:25.361787] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4fb00 (9): Bad file descriptor 00:20:07.301 [2024-07-15 12:59:25.361828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361852] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361864] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.301 [2024-07-15 12:59:25.361912] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.361924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.361936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3eb40 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:07.302 [2024-07-15 12:59:25.362172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 
12:59:25.362467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362830] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362875] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362898] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362944] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.362967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.362989] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.362994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363278] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363574] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.302 [2024-07-15 12:59:25.363597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.302 [2024-07-15 12:59:25.363609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.302 [2024-07-15 12:59:25.363619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363861] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363923] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363943] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363966] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.363986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364041] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364169] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.364191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3efe0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.376094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.376700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.376717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb340f0 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.376849] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb340f0 was disconnected and freed. reset controller. 00:20:07.303 [2024-07-15 12:59:25.377148] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.303 [2024-07-15 12:59:25.377177] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.303 [2024-07-15 12:59:25.377202] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.303 [2024-07-15 12:59:25.377264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.303 [2024-07-15 12:59:25.377284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.303 [2024-07-15 12:59:25.377313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.303 [2024-07-15 12:59:25.377342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:07.303 [2024-07-15 12:59:25.377370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4f920 is same with the state(5) to be set 00:20:07.303 [2024-07-15 12:59:25.377413] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.303 [2024-07-15 12:59:25.377435] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:07.303 [2024-07-15 12:59:25.377454] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:07.303 [2024-07-15 12:59:25.377536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 
12:59:25.377884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.377974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.377988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.303 [2024-07-15 12:59:25.378366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.303 [2024-07-15 12:59:25.378380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.378986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.378999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.379504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.379518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa01790 is same with the state(5) to be set 00:20:07.304 [2024-07-15 12:59:25.380823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.380846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.380867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.380883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.380899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.380918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.380935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.380949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.380965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.380978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.380994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381008] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:07.304 [2024-07-15 12:59:25.381975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.304 [2024-07-15 12:59:25.381989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:07.305 [2024-07-15 12:59:25.382292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 
12:59:25.382604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.382618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.382633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.392262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.392279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa02c20 is same with the state(5) to be set 00:20:07.305 [2024-07-15 12:59:25.393696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393790] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.393983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.393996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.305 [2024-07-15 12:59:25.394728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.305 [2024-07-15 12:59:25.394751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.394973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.394989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395333] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.395691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.395705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97fe70 is same with the state(5) to be set 00:20:07.306 [2024-07-15 12:59:25.396965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.396987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.397974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.397990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:07.306 [2024-07-15 12:59:25.398417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.306 [2024-07-15 12:59:25.398466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.306 [2024-07-15 12:59:25.398480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 
12:59:25.398720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.398924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.398938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98d1c0 is same with the state(5) to be set 00:20:07.307 [2024-07-15 12:59:25.400200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400610] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.400977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.400991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.401977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.401990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.402006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.402023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.402040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.402054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.307 [2024-07-15 12:59:25.402069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.307 [2024-07-15 12:59:25.402083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.402099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.402113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.402129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.308 [2024-07-15 12:59:25.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:07.308 [2024-07-15 12:59:25.402156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb25930 is same with the state(5) to be set
00:20:07.308 [2024-07-15 12:59:25.404696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:07.308 [2024-07-15 12:59:25.404733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:07.308 [2024-07-15 12:59:25.404837] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.404862] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.404883] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.404907] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.404926] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.404951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4f920 (9): Bad file descriptor
00:20:07.308 [2024-07-15 12:59:25.404981] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.405001] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.308 [2024-07-15 12:59:25.405127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 
12:59:25.405458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405784] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.405974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.405990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.406972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.406988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.407002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.407017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.407031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.407047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.407060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.407076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.308 [2024-07-15 12:59:25.407089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.308 [2024-07-15 12:59:25.407104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb32c60 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.408423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:07.308 [2024-07-15 12:59:25.408452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:07.308 [2024-07-15 12:59:25.408469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:07.308 [2024-07-15 12:59:25.408486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:07.308 [2024-07-15 12:59:25.408813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 [2024-07-15 12:59:25.408843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5c980 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.408860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5c980 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.409013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 [2024-07-15 12:59:25.409038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bceb0 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.409053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bceb0 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.410730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.308 [2024-07-15 12:59:25.410766] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:07.308 [2024-07-15 12:59:25.410785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:07.308 [2024-07-15 12:59:25.411030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 
[2024-07-15 12:59:25.411057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b4c90 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.411073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b4c90 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.411198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 [2024-07-15 12:59:25.411222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c7880 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.411238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c7880 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.411494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 [2024-07-15 12:59:25.411518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4850 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.411533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4850 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.411693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 [2024-07-15 12:59:25.411717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4fb00 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.411731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4fb00 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.411759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5c980 (9): Bad file descriptor 00:20:07.308 [2024-07-15 12:59:25.411786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bceb0 (9): Bad file descriptor 00:20:07.308 [2024-07-15 12:59:25.412193] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:07.308 [2024-07-15 12:59:25.412411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.308 [2024-07-15 12:59:25.412438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57e200 with addr=10.0.0.2, port=4420 00:20:07.308 [2024-07-15 12:59:25.412454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e200 is same with the state(5) to be set 00:20:07.308 [2024-07-15 12:59:25.412615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.309 [2024-07-15 12:59:25.412639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40690 with addr=10.0.0.2, port=4420 00:20:07.309 [2024-07-15 12:59:25.412654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40690 is same with the state(5) to be set 00:20:07.309 [2024-07-15 12:59:25.412796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.309 [2024-07-15 12:59:25.412827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x492610 with addr=10.0.0.2, port=4420 00:20:07.309 [2024-07-15 12:59:25.412842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x492610 is same with the state(5) to be set 00:20:07.309 [2024-07-15 12:59:25.412860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x9b4c90 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.412879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c7880 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.412897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4850 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.412914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4fb00 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.412931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.412944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.412962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:07.309 [2024-07-15 12:59:25.412984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.412998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57e200 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.413182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40690 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.413200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x492610 (9): Bad file descriptor 00:20:07.309 [2024-07-15 12:59:25.413216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:20:07.309 [2024-07-15 12:59:25.413342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:07.309 [2024-07-15 12:59:25.413567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:07.309 [2024-07-15 12:59:25.413579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:07.309 [2024-07-15 12:59:25.413612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.309 [2024-07-15 12:59:25.413641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
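
The "connect() failed, errno = 111" entries above are SPDK's POSIX socket layer reporting that the reconnect attempts to 10.0.0.2:4420 are being refused: on Linux, errno 111 is ECONNREFUSED, so each nvme_tcp_qpair_connect_sock call fails immediately, controller reinitialization fails, and the controllers are left in the failed state logged here. A minimal sketch (assuming a Linux host and an unused local port; not part of the test output above) showing the same errno surfaced from userspace:

    # errno 111 on Linux is ECONNREFUSED -- the error behind the
    # "connect() failed, errno = 111" messages in the log above.
    import errno
    import socket

    print(errno.errorcode.get(111))          # 'ECONNREFUSED' on Linux

    def probe(addr: str, port: int) -> int:
        """Return 0 on success, or the raw errno (e.g. 111) if the connection is refused."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            return s.connect_ex((addr, port))

    # Probing a port with no listener typically prints 111 (ECONNREFUSED).
    print(probe("127.0.0.1", 4420))
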
00:20:07.309 [2024-07-15 12:59:25.414798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.414824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.414855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.414870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.414887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.414901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.414918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.414932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.414948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.414962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.414978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.414992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 
12:59:25.415135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415438] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.415984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.415998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416043] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.309 [2024-07-15 12:59:25.416506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.309 [2024-07-15 12:59:25.416520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.310 [2024-07-15 12:59:25.416536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.310 [2024-07-15 12:59:25.416553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.310 [2024-07-15 12:59:25.416570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.310 [2024-07-15 12:59:25.416585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.310 [2024-07-15 12:59:25.416602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.310 [2024-07-15 12:59:25.416616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.310 [2024-07-15 12:59:25.416632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.310 [2024-07-15 12:59:25.416646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.310 [2024-07-15 12:59:25.416662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.310 [2024-07-15 12:59:25.416676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:07.310 [2024-07-15 12:59:25.416692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.310 [2024-07-15 12:59:25.416706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:07.310 [2024-07-15 12:59:25.416724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.310 [2024-07-15 12:59:25.416745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:07.310 [2024-07-15 12:59:25.416764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:07.310 [2024-07-15 12:59:25.416785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:07.310 [2024-07-15 12:59:25.416801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb35470 is same with the state(5) to be set
00:20:07.310 task offset: 16384 on job bdev=Nvme10n1 fails
00:20:07.310
00:20:07.310                                                                             Latency(us)
00:20:07.310 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:20:07.310 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme1n1 ended in about 0.82 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme1n1              :       0.82     163.40      10.21      78.04       0.00  261965.02   35146.71  234570.33
00:20:07.310 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme2n1 ended in about 0.85 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme2n1              :       0.85     151.45       9.47      75.72       0.00  272535.45   19126.80  256318.58
00:20:07.310 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme3n1 ended in about 0.86 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme3n1              :       0.86     149.20       9.33      74.60       0.00  270890.48   24660.95  273406.48
00:20:07.310 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme4n1 ended in about 0.86 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme4n1              :       0.86     148.61       9.29      74.31       0.00  265965.67   21942.42  282727.16
00:20:07.310 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme5n1 ended in about 0.86 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme5n1              :       0.86     153.85       9.62      74.03       0.00  254526.10   18641.35  259425.47
00:20:07.310 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme6n1 ended in about 0.87 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme6n1              :       0.87     147.52       9.22      73.76       0.00  256438.93   20680.25  256318.58
00:20:07.310 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme7n1 ended in about 0.87 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme7n1              :       0.87     146.68       9.17      73.34       0.00  252148.75   21262.79  260978.92
00:20:07.310 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme8n1 ended in about 0.87 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme8n1              :       0.87     147.31       9.21      73.65       0.00  245035.87   20486.07  260978.92
00:20:07.310 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme9n1 ended in about 0.88 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme9n1              :       0.88     145.07       9.07      72.53       0.00  243715.03   21845.33  273406.48
00:20:07.310 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:07.310 Job: Nvme10n1 ended in about 0.80 seconds with error
00:20:07.310 Verification LBA range: start 0x0 length 0x400
00:20:07.310 Nvme10n1             :       0.80     159.44       9.97      79.72       0.00  211465.99    6213.78  290494.39
00:20:07.310 ===================================================================================================================
00:20:07.310 Total                :               1512.52      94.53     749.71       0.00  253497.87    6213.78  290494.39
00:20:07.310 [2024-07-15 12:59:25.448389] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:07.310 [2024-07-15 12:59:25.448485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:07.310 [2024-07-15 12:59:25.449118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:07.310 [2024-07-15 12:59:25.449156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4f920 with addr=10.0.0.2, port=4420
00:20:07.310 [2024-07-15 12:59:25.449179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4f920 is same with the state(5) to be set
00:20:07.310 [2024-07-15 12:59:25.449232] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.310 [2024-07-15 12:59:25.449255] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.310 [2024-07-15 12:59:25.449273] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.310 [2024-07-15 12:59:25.449290] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.310 [2024-07-15 12:59:25.449312] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:07.310 [2024-07-15 12:59:25.449330] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
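The abort storm and the bdevperf summary above are easier to read in aggregate. A small shell pass over the captured console log can tally them; this is only an illustrative sketch, the log file name and field positions are assumptions, and it is not part of the test suite:

# count READ completions aborted by SQ deletion (hypothetical saved log name)
grep -c 'ABORTED - SQ DELETION' nvmf-tcp-phy-autotest.log

# pull the per-device bdevperf rows: device, IOPS, MiB/s, fails/s and average latency (us)
grep -E 'Nvme[0-9]+n1 +: ' nvmf-tcp-phy-autotest.log |
    awk '{printf "%-10s IOPS=%s MiB/s=%s fail/s=%s avg_us=%s\n", $2, $5, $6, $7, $9}'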
00:20:07.310 [2024-07-15 12:59:25.449651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:07.310 [2024-07-15 12:59:25.449676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:07.310 [2024-07-15 12:59:25.449693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:07.310 [2024-07-15 12:59:25.449709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:07.310 [2024-07-15 12:59:25.449725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:07.310 [2024-07-15 12:59:25.449764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:07.310 [2024-07-15 12:59:25.449862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4f920 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.450211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:07.310 [2024-07-15 12:59:25.450439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.450467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bceb0 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.450484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bceb0 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.450672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.450697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5c980 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.450713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5c980 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.450949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.450975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb4fb00 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.450991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4fb00 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.451229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.451254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9d4850 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.451269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d4850 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.451369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.451393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9c7880 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.451408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9c7880 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.451593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 
12:59:25.451625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9b4c90 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.451640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b4c90 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.451655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.451668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.451685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:07.310 [2024-07-15 12:59:25.451732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:07.310 [2024-07-15 12:59:25.451762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:07.310 [2024-07-15 12:59:25.451790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.452019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.452044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x492610 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.452060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x492610 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.452078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bceb0 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5c980 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4fb00 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d4850 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c7880 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9b4c90 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.452468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa40690 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.452484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa40690 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.452687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.310 [2024-07-15 12:59:25.452712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x57e200 with addr=10.0.0.2, port=4420 00:20:07.310 [2024-07-15 12:59:25.452728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x57e200 is same with the state(5) to be set 00:20:07.310 [2024-07-15 12:59:25.452753] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x492610 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.452772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.452785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.452797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:07.310 [2024-07-15 12:59:25.452815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.452829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.452842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:07.310 [2024-07-15 12:59:25.452857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.452870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.452883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:07.310 [2024-07-15 12:59:25.452898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.452912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.452924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:07.310 [2024-07-15 12:59:25.452939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.452953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.452966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:07.310 [2024-07-15 12:59:25.452981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.452994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.453012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:07.310 [2024-07-15 12:59:25.453052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.310 [2024-07-15 12:59:25.453116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa40690 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.453150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x57e200 (9): Bad file descriptor 00:20:07.310 [2024-07-15 12:59:25.453165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.453177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.453190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:07.310 [2024-07-15 12:59:25.453242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.453275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.453288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:07.310 [2024-07-15 12:59:25.453304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.310 [2024-07-15 12:59:25.453318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.310 [2024-07-15 12:59:25.453331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.310 [2024-07-15 12:59:25.453369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.310 [2024-07-15 12:59:25.453387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
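errno = 111 in the entries above is ECONNREFUSED: the shutdown test has already killed the target application, so every reconnect to 10.0.0.2:4420 is refused and each controller reset ends in the failed state. A quick way to confirm nothing is listening any more is a sketch like the following (namespace and port names taken from this job's trace; not part of the test scripts):

# no output means there is no longer an NVMe/TCP listener on port 4420 inside the target namespace
ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'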
00:20:07.879 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:07.879 12:59:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3436986 00:20:08.817 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3436986) - No such process 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.817 rmmod nvme_tcp 00:20:08.817 rmmod nvme_fabrics 00:20:08.817 rmmod nvme_keyring 00:20:08.817 12:59:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.817 12:59:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.363 12:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:11.363 00:20:11.363 real 0m7.265s 00:20:11.363 user 0m17.203s 00:20:11.363 sys 0m1.386s 00:20:11.363 
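The teardown traced above (kernel module unload, SPDK network namespace removal, address flush) can be reproduced by hand if a run aborts before nvmftestfini gets to it; roughly, and assuming the interface and namespace names used by this job:

# equivalent manual cleanup sketch for this test bed
modprobe -v -r nvme-tcp nvme-fabrics        # same modules the trace removes
ip netns del cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1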
12:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.363 12:59:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:11.363 ************************************ 00:20:11.363 END TEST nvmf_shutdown_tc3 00:20:11.363 ************************************ 00:20:11.363 12:59:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:11.363 12:59:29 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:11.363 00:20:11.363 real 0m28.088s 00:20:11.363 user 1m19.405s 00:20:11.364 sys 0m6.346s 00:20:11.364 12:59:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.364 12:59:29 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:11.364 ************************************ 00:20:11.364 END TEST nvmf_shutdown 00:20:11.364 ************************************ 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:11.364 12:59:29 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.364 12:59:29 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.364 12:59:29 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:20:11.364 12:59:29 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:11.364 12:59:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.364 ************************************ 00:20:11.364 START TEST nvmf_multicontroller 00:20:11.364 ************************************ 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:11.364 * Looking for test storage... 
00:20:11.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:11.364 12:59:29 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:11.364 12:59:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.267 12:59:31 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:13.267 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:13.267 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:13.267 Found net devices under 0000:84:00.0: cvl_0_0 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:13.267 Found net devices under 0000:84:00.1: cvl_0_1 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.267 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.526 12:59:31 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:20:13.526 00:20:13.526 --- 10.0.0.2 ping statistics --- 00:20:13.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.526 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:20:13.526 00:20:13.526 --- 10.0.0.1 ping statistics --- 00:20:13.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.526 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3439511 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3439511 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3439511 ']' 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.526 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.526 [2024-07-15 12:59:31.616593] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:13.526 [2024-07-15 12:59:31.616677] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.526 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.526 [2024-07-15 12:59:31.682939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:13.784 [2024-07-15 12:59:31.790456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.784 [2024-07-15 12:59:31.790525] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.784 [2024-07-15 12:59:31.790538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.784 [2024-07-15 12:59:31.790549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.784 [2024-07-15 12:59:31.790572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
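Once the reactors below come up, the trace configures the target over its RPC socket. The same sequence issued by hand with scripts/rpc.py would look roughly like this (the repository-relative path and the default /var/tmp/spdk.sock socket are assumptions, not shown in the trace):

# create the TCP transport, a malloc-backed namespace, and listeners on both test ports
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421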
00:20:13.784 [2024-07-15 12:59:31.790665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.784 [2024-07-15 12:59:31.790729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.784 [2024-07-15 12:59:31.790732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 [2024-07-15 12:59:31.922572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 Malloc0 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 [2024-07-15 12:59:31.981764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.784 
12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.784 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.784 [2024-07-15 12:59:31.989639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:14.042 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.043 12:59:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:14.043 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.043 12:59:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.043 Malloc1 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3439546 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 3439546 /var/tmp/bdevperf.sock 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3439546 ']' 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.043 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.301 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.301 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:14.301 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:14.301 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.301 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.562 NVMe0n1 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.562 1 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.562 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.563 request: 00:20:14.563 { 00:20:14.563 "name": "NVMe0", 00:20:14.563 "trtype": "tcp", 00:20:14.563 "traddr": "10.0.0.2", 00:20:14.563 "adrfam": "ipv4", 00:20:14.563 "trsvcid": "4420", 00:20:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.563 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:14.563 "hostaddr": "10.0.0.2", 00:20:14.563 "hostsvcid": "60000", 00:20:14.563 "prchk_reftag": false, 00:20:14.563 "prchk_guard": false, 00:20:14.563 "hdgst": false, 00:20:14.563 "ddgst": false, 00:20:14.563 "method": "bdev_nvme_attach_controller", 00:20:14.563 "req_id": 1 00:20:14.563 } 00:20:14.563 Got JSON-RPC error response 00:20:14.563 response: 00:20:14.563 { 00:20:14.563 "code": -114, 00:20:14.563 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:14.563 } 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.563 request: 00:20:14.563 { 00:20:14.563 "name": "NVMe0", 00:20:14.563 "trtype": "tcp", 00:20:14.563 "traddr": "10.0.0.2", 00:20:14.563 "adrfam": "ipv4", 00:20:14.563 "trsvcid": "4420", 00:20:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:14.563 "hostaddr": "10.0.0.2", 00:20:14.563 "hostsvcid": "60000", 00:20:14.563 "prchk_reftag": false, 00:20:14.563 "prchk_guard": false, 00:20:14.563 
"hdgst": false, 00:20:14.563 "ddgst": false, 00:20:14.563 "method": "bdev_nvme_attach_controller", 00:20:14.563 "req_id": 1 00:20:14.563 } 00:20:14.563 Got JSON-RPC error response 00:20:14.563 response: 00:20:14.563 { 00:20:14.563 "code": -114, 00:20:14.563 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:14.563 } 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.563 request: 00:20:14.563 { 00:20:14.563 "name": "NVMe0", 00:20:14.563 "trtype": "tcp", 00:20:14.563 "traddr": "10.0.0.2", 00:20:14.563 "adrfam": "ipv4", 00:20:14.563 "trsvcid": "4420", 00:20:14.563 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.563 "hostaddr": "10.0.0.2", 00:20:14.563 "hostsvcid": "60000", 00:20:14.563 "prchk_reftag": false, 00:20:14.563 "prchk_guard": false, 00:20:14.563 "hdgst": false, 00:20:14.563 "ddgst": false, 00:20:14.563 "multipath": "disable", 00:20:14.563 "method": "bdev_nvme_attach_controller", 00:20:14.563 "req_id": 1 00:20:14.563 } 00:20:14.563 Got JSON-RPC error response 00:20:14.563 response: 00:20:14.563 { 00:20:14.563 "code": -114, 00:20:14.563 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:14.563 } 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.563 12:59:32 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:14.563 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 request: 00:20:14.564 { 00:20:14.564 "name": "NVMe0", 00:20:14.564 "trtype": "tcp", 00:20:14.564 "traddr": "10.0.0.2", 00:20:14.564 "adrfam": "ipv4", 00:20:14.564 "trsvcid": "4420", 00:20:14.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.564 "hostaddr": "10.0.0.2", 00:20:14.564 "hostsvcid": "60000", 00:20:14.564 "prchk_reftag": false, 00:20:14.564 "prchk_guard": false, 00:20:14.564 "hdgst": false, 00:20:14.564 "ddgst": false, 00:20:14.564 "multipath": "failover", 00:20:14.564 "method": "bdev_nvme_attach_controller", 00:20:14.564 "req_id": 1 00:20:14.564 } 00:20:14.564 Got JSON-RPC error response 00:20:14.564 response: 00:20:14.564 { 00:20:14.564 "code": -114, 00:20:14.564 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:14.564 } 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.564 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:14.824 12:59:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.758 0 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3439546 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3439546 ']' 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3439546 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.016 12:59:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439546 00:20:16.016 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:16.016 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:16.016 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439546' 00:20:16.016 killing process with pid 3439546 00:20:16.016 12:59:34 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3439546 00:20:16.016 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3439546 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:16.276 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:16.276 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:16.276 [2024-07-15 12:59:32.086391] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:16.276 [2024-07-15 12:59:32.086474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439546 ] 00:20:16.276 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.276 [2024-07-15 12:59:32.146837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.276 [2024-07-15 12:59:32.256419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.276 [2024-07-15 12:59:32.810754] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 7cde765e-7364-4511-916e-fe3799a0d1fe already exists 00:20:16.276 [2024-07-15 12:59:32.810798] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:7cde765e-7364-4511-916e-fe3799a0d1fe alias for bdev NVMe1n1 00:20:16.276 [2024-07-15 12:59:32.810813] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:16.276 Running I/O for 1 seconds... 
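The one-second write pass recorded in try.txt above is not started by bdevperf on its own: multicontroller.sh launches bdevperf in wait-for-RPC mode, attaches the controllers over /var/tmp/bdevperf.sock (the bdev_nvme_attach_controller calls earlier in this log), and only then kicks off the queued job with the bdevperf.py helper shown a few entries back. A minimal sketch of that flow, with the binary path and flags assumed from the workload parameters echoed in try.txt (4096-byte writes, queue depth 128, 1 second) rather than taken verbatim from the script:
# sketch only; the exact invocation lives in host/multicontroller.sh
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &> try.txt &
# wait for the socket, attach NVMe0/NVMe1 through it (the rpc_cmd calls above), then start the run:
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
As the sequence above shows, the -114 responses are the negative checks: every re-attach that reuses the name NVMe0 on the already-attached address is rejected, only the attach to the second listener on port 4421 succeeds, and the script swaps that path out for NVMe1 before running I/O, so the job exercises two controllers against the same subsystem.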
00:20:16.276 00:20:16.276 Latency(us) 00:20:16.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.277 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:16.277 NVMe0n1 : 1.01 19298.28 75.38 0.00 0.00 6623.03 5121.52 12379.02 00:20:16.277 =================================================================================================================== 00:20:16.277 Total : 19298.28 75.38 0.00 0.00 6623.03 5121.52 12379.02 00:20:16.277 Received shutdown signal, test time was about 1.000000 seconds 00:20:16.277 00:20:16.277 Latency(us) 00:20:16.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.277 =================================================================================================================== 00:20:16.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.277 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.277 rmmod nvme_tcp 00:20:16.277 rmmod nvme_fabrics 00:20:16.277 rmmod nvme_keyring 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3439511 ']' 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3439511 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3439511 ']' 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3439511 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439511 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439511' 00:20:16.277 killing process with pid 3439511 00:20:16.277 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3439511 00:20:16.277 12:59:34 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3439511 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:16.535 12:59:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.071 12:59:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.071 00:20:19.071 real 0m7.597s 00:20:19.071 user 0m11.663s 00:20:19.071 sys 0m2.366s 00:20:19.071 12:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.071 12:59:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.071 ************************************ 00:20:19.071 END TEST nvmf_multicontroller 00:20:19.071 ************************************ 00:20:19.071 12:59:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:19.071 12:59:36 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:19.071 12:59:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.071 12:59:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.071 12:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:19.071 ************************************ 00:20:19.071 START TEST nvmf_aer 00:20:19.071 ************************************ 00:20:19.071 12:59:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:19.071 * Looking for test storage... 
00:20:19.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.072 12:59:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.012 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:21.013 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:20:21.013 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:21.013 Found net devices under 0000:84:00.0: cvl_0_0 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:21.013 Found net devices under 0000:84:00.1: cvl_0_1 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.013 
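The nvmf_tcp_init entries that follow set up the physical test topology: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1. Condensed into the underlying commands (interface, namespace, and address values are simply the ones this run happens to use):
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow TCP port 4420 on cvl_0_1
ping -c 1 10.0.0.2                                                 # reachability in both directions, as logged below
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
nvmf_tgt is then started inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt entry further down), so the 10.0.0.2:4420 listener that aer.sh adds later is reached over cvl_0_1.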
12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.013 12:59:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:21.013 00:20:21.013 --- 10.0.0.2 ping statistics --- 00:20:21.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.013 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:21.013 00:20:21.013 --- 10.0.0.1 ping statistics --- 00:20:21.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.013 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3441773 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3441773 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3441773 ']' 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.013 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.013 [2024-07-15 12:59:39.106785] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:21.013 [2024-07-15 12:59:39.106857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.013 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.013 [2024-07-15 12:59:39.173667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.272 [2024-07-15 12:59:39.282212] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.272 [2024-07-15 12:59:39.282279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:21.272 [2024-07-15 12:59:39.282307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.272 [2024-07-15 12:59:39.282318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.272 [2024-07-15 12:59:39.282327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.272 [2024-07-15 12:59:39.282413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.272 [2024-07-15 12:59:39.282452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.272 [2024-07-15 12:59:39.282863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.272 [2024-07-15 12:59:39.282869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.272 [2024-07-15 12:59:39.437452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.272 Malloc0 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.272 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.531 [2024-07-15 12:59:39.490527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.531 [ 00:20:21.531 { 00:20:21.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:21.531 "subtype": "Discovery", 00:20:21.531 "listen_addresses": [], 00:20:21.531 "allow_any_host": true, 00:20:21.531 "hosts": [] 00:20:21.531 }, 00:20:21.531 { 00:20:21.531 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.531 "subtype": "NVMe", 00:20:21.531 "listen_addresses": [ 00:20:21.531 { 00:20:21.531 "trtype": "TCP", 00:20:21.531 "adrfam": "IPv4", 00:20:21.531 "traddr": "10.0.0.2", 00:20:21.531 "trsvcid": "4420" 00:20:21.531 } 00:20:21.531 ], 00:20:21.531 "allow_any_host": true, 00:20:21.531 "hosts": [], 00:20:21.531 "serial_number": "SPDK00000000000001", 00:20:21.531 "model_number": "SPDK bdev Controller", 00:20:21.531 "max_namespaces": 2, 00:20:21.531 "min_cntlid": 1, 00:20:21.531 "max_cntlid": 65519, 00:20:21.531 "namespaces": [ 00:20:21.531 { 00:20:21.531 "nsid": 1, 00:20:21.531 "bdev_name": "Malloc0", 00:20:21.531 "name": "Malloc0", 00:20:21.531 "nguid": "D2CE81A23F8B4A21AE69399B20CEA7B0", 00:20:21.531 "uuid": "d2ce81a2-3f8b-4a21-ae69-399b20cea7b0" 00:20:21.531 } 00:20:21.531 ] 00:20:21.531 } 00:20:21.531 ] 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=3441912 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:21.531 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.531 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.789 Malloc1 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.789 Asynchronous Event Request test 00:20:21.789 Attaching to 10.0.0.2 00:20:21.789 Attached to 10.0.0.2 00:20:21.789 Registering asynchronous event callbacks... 00:20:21.789 Starting namespace attribute notice tests for all controllers... 00:20:21.789 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:21.789 aer_cb - Changed Namespace 00:20:21.789 Cleaning up... 00:20:21.789 [ 00:20:21.789 { 00:20:21.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:21.789 "subtype": "Discovery", 00:20:21.789 "listen_addresses": [], 00:20:21.789 "allow_any_host": true, 00:20:21.789 "hosts": [] 00:20:21.789 }, 00:20:21.789 { 00:20:21.789 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.789 "subtype": "NVMe", 00:20:21.789 "listen_addresses": [ 00:20:21.789 { 00:20:21.789 "trtype": "TCP", 00:20:21.789 "adrfam": "IPv4", 00:20:21.789 "traddr": "10.0.0.2", 00:20:21.789 "trsvcid": "4420" 00:20:21.789 } 00:20:21.789 ], 00:20:21.789 "allow_any_host": true, 00:20:21.789 "hosts": [], 00:20:21.789 "serial_number": "SPDK00000000000001", 00:20:21.789 "model_number": "SPDK bdev Controller", 00:20:21.789 "max_namespaces": 2, 00:20:21.789 "min_cntlid": 1, 00:20:21.789 "max_cntlid": 65519, 00:20:21.789 "namespaces": [ 00:20:21.789 { 00:20:21.789 "nsid": 1, 00:20:21.789 "bdev_name": "Malloc0", 00:20:21.789 "name": "Malloc0", 00:20:21.789 "nguid": "D2CE81A23F8B4A21AE69399B20CEA7B0", 00:20:21.789 "uuid": "d2ce81a2-3f8b-4a21-ae69-399b20cea7b0" 00:20:21.789 }, 00:20:21.789 { 00:20:21.789 "nsid": 2, 00:20:21.789 "bdev_name": "Malloc1", 00:20:21.789 "name": "Malloc1", 00:20:21.789 "nguid": "6995EBA933544FC88C8BBF4304CE2877", 00:20:21.789 "uuid": "6995eba9-3354-4fc8-8c8b-bf4304ce2877" 00:20:21.789 } 00:20:21.789 ] 00:20:21.789 } 00:20:21.789 ] 00:20:21.789 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 3441912 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@46 
-- # rpc_cmd bdev_malloc_delete Malloc1 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.790 rmmod nvme_tcp 00:20:21.790 rmmod nvme_fabrics 00:20:21.790 rmmod nvme_keyring 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3441773 ']' 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3441773 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3441773 ']' 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3441773 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441773 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441773' 00:20:21.790 killing process with pid 3441773 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3441773 00:20:21.790 12:59:39 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3441773 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:20:22.049 12:59:40 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.052 12:59:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.052 00:20:24.052 real 0m5.454s 00:20:24.052 user 0m4.258s 00:20:24.052 sys 0m1.932s 00:20:24.311 12:59:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.311 12:59:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:24.311 ************************************ 00:20:24.311 END TEST nvmf_aer 00:20:24.311 ************************************ 00:20:24.311 12:59:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:24.311 12:59:42 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:24.311 12:59:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:24.311 12:59:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.311 12:59:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:24.311 ************************************ 00:20:24.311 START TEST nvmf_async_init 00:20:24.311 ************************************ 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:24.311 * Looking for test storage... 00:20:24.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.311 12:59:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a58a4327bd654606bad38e438cb80f6a 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.312 12:59:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.213 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.213 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.213 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:26.472 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:26.472 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:26.472 Found net devices under 0000:84:00.0: cvl_0_0 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:26.472 Found net devices under 0000:84:00.1: cvl_0_1 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.472 
12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:20:26.472 00:20:26.472 --- 10.0.0.2 ping statistics --- 00:20:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.472 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:26.472 00:20:26.472 --- 10.0.0.1 ping statistics --- 00:20:26.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.472 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:26.472 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3443871 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3443871 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3443871 ']' 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:26.473 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.473 [2024-07-15 12:59:44.639316] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:26.473 [2024-07-15 12:59:44.639409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.473 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.731 [2024-07-15 12:59:44.704642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.731 [2024-07-15 12:59:44.809914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.731 [2024-07-15 12:59:44.809960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.731 [2024-07-15 12:59:44.809989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.731 [2024-07-15 12:59:44.810001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.731 [2024-07-15 12:59:44.810011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.731 [2024-07-15 12:59:44.810060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.731 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.731 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:26.731 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:26.731 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:26.731 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 [2024-07-15 12:59:44.954921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 null0 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a58a4327bd654606bad38e438cb80f6a 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:26.989 [2024-07-15 12:59:44.995183] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.989 12:59:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.247 nvme0n1 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.247 [ 00:20:27.247 { 00:20:27.247 "name": "nvme0n1", 00:20:27.247 "aliases": [ 00:20:27.247 "a58a4327-bd65-4606-bad3-8e438cb80f6a" 00:20:27.247 ], 00:20:27.247 "product_name": "NVMe disk", 00:20:27.247 "block_size": 512, 00:20:27.247 "num_blocks": 2097152, 00:20:27.247 "uuid": "a58a4327-bd65-4606-bad3-8e438cb80f6a", 00:20:27.247 "assigned_rate_limits": { 00:20:27.247 "rw_ios_per_sec": 0, 00:20:27.247 "rw_mbytes_per_sec": 0, 00:20:27.247 "r_mbytes_per_sec": 0, 00:20:27.247 "w_mbytes_per_sec": 0 00:20:27.247 }, 00:20:27.247 "claimed": false, 00:20:27.247 "zoned": false, 00:20:27.247 "supported_io_types": { 00:20:27.247 "read": true, 00:20:27.247 "write": true, 00:20:27.247 "unmap": false, 00:20:27.247 "flush": true, 00:20:27.247 "reset": true, 00:20:27.247 "nvme_admin": true, 00:20:27.247 "nvme_io": true, 00:20:27.247 "nvme_io_md": false, 00:20:27.247 "write_zeroes": true, 00:20:27.247 "zcopy": false, 00:20:27.247 "get_zone_info": false, 00:20:27.247 "zone_management": false, 00:20:27.247 "zone_append": false, 00:20:27.247 "compare": true, 00:20:27.247 "compare_and_write": true, 00:20:27.247 "abort": true, 00:20:27.247 "seek_hole": false, 00:20:27.247 "seek_data": false, 00:20:27.247 "copy": true, 00:20:27.247 "nvme_iov_md": false 00:20:27.247 }, 00:20:27.247 "memory_domains": [ 00:20:27.247 { 00:20:27.247 "dma_device_id": "system", 00:20:27.247 "dma_device_type": 1 00:20:27.247 } 00:20:27.247 ], 00:20:27.247 "driver_specific": { 00:20:27.247 "nvme": [ 00:20:27.247 { 00:20:27.247 "trid": { 00:20:27.247 "trtype": "TCP", 00:20:27.247 "adrfam": "IPv4", 00:20:27.247 "traddr": "10.0.0.2", 00:20:27.247 "trsvcid": "4420", 00:20:27.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:27.247 }, 00:20:27.247 "ctrlr_data": { 00:20:27.247 "cntlid": 1, 00:20:27.247 "vendor_id": "0x8086", 00:20:27.247 "model_number": "SPDK bdev Controller", 00:20:27.247 "serial_number": "00000000000000000000", 00:20:27.247 "firmware_revision": "24.09", 00:20:27.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.247 "oacs": { 00:20:27.247 "security": 0, 00:20:27.247 "format": 0, 00:20:27.247 "firmware": 0, 00:20:27.247 "ns_manage": 0 00:20:27.247 }, 00:20:27.247 "multi_ctrlr": true, 00:20:27.247 "ana_reporting": false 00:20:27.247 }, 00:20:27.247 "vs": { 00:20:27.247 "nvme_version": "1.3" 00:20:27.247 }, 00:20:27.247 "ns_data": { 00:20:27.247 "id": 1, 00:20:27.247 "can_share": true 00:20:27.247 } 00:20:27.247 } 00:20:27.247 ], 00:20:27.247 "mp_policy": "active_passive" 00:20:27.247 } 00:20:27.247 } 00:20:27.247 ] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.247 [2024-07-15 12:59:45.243855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:27.247 [2024-07-15 12:59:45.243951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1c5c0 (9): Bad file descriptor 00:20:27.247 [2024-07-15 12:59:45.375864] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.247 [ 00:20:27.247 { 00:20:27.247 "name": "nvme0n1", 00:20:27.247 "aliases": [ 00:20:27.247 "a58a4327-bd65-4606-bad3-8e438cb80f6a" 00:20:27.247 ], 00:20:27.247 "product_name": "NVMe disk", 00:20:27.247 "block_size": 512, 00:20:27.247 "num_blocks": 2097152, 00:20:27.247 "uuid": "a58a4327-bd65-4606-bad3-8e438cb80f6a", 00:20:27.247 "assigned_rate_limits": { 00:20:27.247 "rw_ios_per_sec": 0, 00:20:27.247 "rw_mbytes_per_sec": 0, 00:20:27.247 "r_mbytes_per_sec": 0, 00:20:27.247 "w_mbytes_per_sec": 0 00:20:27.247 }, 00:20:27.247 "claimed": false, 00:20:27.247 "zoned": false, 00:20:27.247 "supported_io_types": { 00:20:27.247 "read": true, 00:20:27.247 "write": true, 00:20:27.247 "unmap": false, 00:20:27.247 "flush": true, 00:20:27.247 "reset": true, 00:20:27.247 "nvme_admin": true, 00:20:27.247 "nvme_io": true, 00:20:27.247 "nvme_io_md": false, 00:20:27.247 "write_zeroes": true, 00:20:27.247 "zcopy": false, 00:20:27.247 "get_zone_info": false, 00:20:27.247 "zone_management": false, 00:20:27.247 "zone_append": false, 00:20:27.247 "compare": true, 00:20:27.247 "compare_and_write": true, 00:20:27.247 "abort": true, 00:20:27.247 "seek_hole": false, 00:20:27.247 "seek_data": false, 00:20:27.247 "copy": true, 00:20:27.247 "nvme_iov_md": false 00:20:27.247 }, 00:20:27.247 "memory_domains": [ 00:20:27.247 { 00:20:27.247 "dma_device_id": "system", 00:20:27.247 "dma_device_type": 1 00:20:27.247 } 00:20:27.247 ], 00:20:27.247 "driver_specific": { 00:20:27.247 "nvme": [ 00:20:27.247 { 00:20:27.247 "trid": { 00:20:27.247 "trtype": "TCP", 00:20:27.247 "adrfam": "IPv4", 00:20:27.247 "traddr": "10.0.0.2", 00:20:27.247 "trsvcid": "4420", 00:20:27.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:27.247 }, 00:20:27.247 "ctrlr_data": { 00:20:27.247 "cntlid": 2, 00:20:27.247 "vendor_id": "0x8086", 00:20:27.247 "model_number": "SPDK bdev Controller", 00:20:27.247 "serial_number": "00000000000000000000", 00:20:27.247 "firmware_revision": "24.09", 00:20:27.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.247 "oacs": { 00:20:27.247 "security": 0, 00:20:27.247 "format": 0, 00:20:27.247 "firmware": 0, 00:20:27.247 "ns_manage": 0 00:20:27.247 }, 00:20:27.247 "multi_ctrlr": true, 00:20:27.247 "ana_reporting": false 00:20:27.247 }, 00:20:27.247 "vs": { 00:20:27.247 "nvme_version": "1.3" 00:20:27.247 }, 00:20:27.247 "ns_data": { 00:20:27.247 "id": 1, 00:20:27.247 "can_share": true 00:20:27.247 } 00:20:27.247 } 00:20:27.247 ], 00:20:27.247 "mp_policy": "active_passive" 00:20:27.247 } 00:20:27.247 } 
00:20:27.247 ] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.6PwghPzD2Q 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.6PwghPzD2Q 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:27.247 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.248 [2024-07-15 12:59:45.420450] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:27.248 [2024-07-15 12:59:45.420564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6PwghPzD2Q 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.248 [2024-07-15 12:59:45.428476] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6PwghPzD2Q 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.248 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.248 [2024-07-15 12:59:45.436500] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.248 [2024-07-15 12:59:45.436560] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
00:20:27.506 nvme0n1 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.506 [ 00:20:27.506 { 00:20:27.506 "name": "nvme0n1", 00:20:27.506 "aliases": [ 00:20:27.506 "a58a4327-bd65-4606-bad3-8e438cb80f6a" 00:20:27.506 ], 00:20:27.506 "product_name": "NVMe disk", 00:20:27.506 "block_size": 512, 00:20:27.506 "num_blocks": 2097152, 00:20:27.506 "uuid": "a58a4327-bd65-4606-bad3-8e438cb80f6a", 00:20:27.506 "assigned_rate_limits": { 00:20:27.506 "rw_ios_per_sec": 0, 00:20:27.506 "rw_mbytes_per_sec": 0, 00:20:27.506 "r_mbytes_per_sec": 0, 00:20:27.506 "w_mbytes_per_sec": 0 00:20:27.506 }, 00:20:27.506 "claimed": false, 00:20:27.506 "zoned": false, 00:20:27.506 "supported_io_types": { 00:20:27.506 "read": true, 00:20:27.506 "write": true, 00:20:27.506 "unmap": false, 00:20:27.506 "flush": true, 00:20:27.506 "reset": true, 00:20:27.506 "nvme_admin": true, 00:20:27.506 "nvme_io": true, 00:20:27.506 "nvme_io_md": false, 00:20:27.506 "write_zeroes": true, 00:20:27.506 "zcopy": false, 00:20:27.506 "get_zone_info": false, 00:20:27.506 "zone_management": false, 00:20:27.506 "zone_append": false, 00:20:27.506 "compare": true, 00:20:27.506 "compare_and_write": true, 00:20:27.506 "abort": true, 00:20:27.506 "seek_hole": false, 00:20:27.506 "seek_data": false, 00:20:27.506 "copy": true, 00:20:27.506 "nvme_iov_md": false 00:20:27.506 }, 00:20:27.506 "memory_domains": [ 00:20:27.506 { 00:20:27.506 "dma_device_id": "system", 00:20:27.506 "dma_device_type": 1 00:20:27.506 } 00:20:27.506 ], 00:20:27.506 "driver_specific": { 00:20:27.506 "nvme": [ 00:20:27.506 { 00:20:27.506 "trid": { 00:20:27.506 "trtype": "TCP", 00:20:27.506 "adrfam": "IPv4", 00:20:27.506 "traddr": "10.0.0.2", 00:20:27.506 "trsvcid": "4421", 00:20:27.506 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:27.506 }, 00:20:27.506 "ctrlr_data": { 00:20:27.506 "cntlid": 3, 00:20:27.506 "vendor_id": "0x8086", 00:20:27.506 "model_number": "SPDK bdev Controller", 00:20:27.506 "serial_number": "00000000000000000000", 00:20:27.506 "firmware_revision": "24.09", 00:20:27.506 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.506 "oacs": { 00:20:27.506 "security": 0, 00:20:27.506 "format": 0, 00:20:27.506 "firmware": 0, 00:20:27.506 "ns_manage": 0 00:20:27.506 }, 00:20:27.506 "multi_ctrlr": true, 00:20:27.506 "ana_reporting": false 00:20:27.506 }, 00:20:27.506 "vs": { 00:20:27.506 "nvme_version": "1.3" 00:20:27.506 }, 00:20:27.506 "ns_data": { 00:20:27.506 "id": 1, 00:20:27.506 "can_share": true 00:20:27.506 } 00:20:27.506 } 00:20:27.506 ], 00:20:27.506 "mp_policy": "active_passive" 00:20:27.506 } 00:20:27.506 } 00:20:27.506 ] 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.6PwghPzD2Q 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.506 rmmod nvme_tcp 00:20:27.506 rmmod nvme_fabrics 00:20:27.506 rmmod nvme_keyring 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3443871 ']' 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3443871 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3443871 ']' 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3443871 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3443871 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3443871' 00:20:27.506 killing process with pid 3443871 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3443871 00:20:27.506 [2024-07-15 12:59:45.607711] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:27.506 [2024-07-15 12:59:45.607782] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:27.506 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3443871 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.766 12:59:45 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:20:29.670 12:59:47 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:29.670 00:20:29.670 real 0m5.561s 00:20:29.670 user 0m2.112s 00:20:29.670 sys 0m1.815s 00:20:29.670 12:59:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.670 12:59:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:29.670 ************************************ 00:20:29.670 END TEST nvmf_async_init 00:20:29.670 ************************************ 00:20:29.929 12:59:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:29.929 12:59:47 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:29.929 12:59:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:29.929 12:59:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.929 12:59:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 ************************************ 00:20:29.929 START TEST dma 00:20:29.929 ************************************ 00:20:29.929 12:59:47 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:29.929 * Looking for test storage... 00:20:29.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.929 12:59:47 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.929 12:59:47 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.929 12:59:47 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.929 12:59:47 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.929 12:59:47 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.929 12:59:47 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.929 12:59:47 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.929 12:59:47 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:29.929 12:59:47 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.929 12:59:47 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.929 12:59:47 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:29.929 12:59:47 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:29.929 00:20:29.929 real 0m0.075s 00:20:29.929 user 0m0.035s 00:20:29.929 sys 0m0.045s 00:20:29.929 12:59:48 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.929 12:59:48 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:20:29.929 ************************************ 00:20:29.929 END TEST dma 00:20:29.929 ************************************ 00:20:29.930 12:59:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:29.930 12:59:48 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:29.930 12:59:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:29.930 12:59:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.930 12:59:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:29.930 ************************************ 00:20:29.930 START TEST nvmf_identify 00:20:29.930 ************************************ 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:29.930 * Looking for test storage... 00:20:29.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.930 12:59:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:32.461 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:32.461 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:32.461 Found net devices under 0000:84:00.0: cvl_0_0 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:32.461 Found net devices under 0000:84:00.1: cvl_0_1 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:32.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:32.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:20:32.461 00:20:32.461 --- 10.0.0.2 ping statistics --- 00:20:32.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.461 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:32.461 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:32.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:32.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:20:32.461 00:20:32.461 --- 10.0.0.1 ping statistics --- 00:20:32.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.461 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3446012 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3446012 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3446012 ']' 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:32.462 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.462 [2024-07-15 12:59:50.363267] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
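Everything nvmf_tcp_init did above boils down to splitting the two E810 ports into an initiator/target pair, with the target-side port isolated in its own network namespace and reachability verified by the two pings. A condensed sketch of the same setup (interface names, addresses and port copied from the log; needs root and two otherwise unused ports):

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0            # target-side port, moved into the namespace
  INI_IF=cvl_0_1            # initiator-side port, stays in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"                         # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"     # target IP
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # open TCP port 4420 on the initiator-side interface, as the test script does
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                             # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> root ns

With that in place the target (nvmf_tgt) is launched inside the namespace via ip netns exec, as the host/identify.sh line above shows.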
00:20:32.462 [2024-07-15 12:59:50.363339] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.462 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.462 [2024-07-15 12:59:50.431875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.462 [2024-07-15 12:59:50.539915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.462 [2024-07-15 12:59:50.539972] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:32.462 [2024-07-15 12:59:50.539986] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.462 [2024-07-15 12:59:50.539996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.462 [2024-07-15 12:59:50.540006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.462 [2024-07-15 12:59:50.540096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.462 [2024-07-15 12:59:50.540171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.462 [2024-07-15 12:59:50.540718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.462 [2024-07-15 12:59:50.540729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 [2024-07-15 12:59:50.677653] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 Malloc0 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 [2024-07-15 12:59:50.759153] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.721 [ 00:20:32.721 { 00:20:32.721 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:32.721 "subtype": "Discovery", 00:20:32.721 "listen_addresses": [ 00:20:32.721 { 00:20:32.721 "trtype": "TCP", 00:20:32.721 "adrfam": "IPv4", 00:20:32.721 "traddr": "10.0.0.2", 00:20:32.721 "trsvcid": "4420" 00:20:32.721 } 00:20:32.721 ], 00:20:32.721 "allow_any_host": true, 00:20:32.721 "hosts": [] 00:20:32.721 }, 00:20:32.721 { 00:20:32.721 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.721 "subtype": "NVMe", 00:20:32.721 "listen_addresses": [ 00:20:32.721 { 00:20:32.721 "trtype": "TCP", 00:20:32.721 "adrfam": "IPv4", 00:20:32.721 "traddr": "10.0.0.2", 00:20:32.721 "trsvcid": "4420" 00:20:32.721 } 00:20:32.721 ], 00:20:32.721 "allow_any_host": true, 00:20:32.721 "hosts": [], 00:20:32.721 "serial_number": "SPDK00000000000001", 00:20:32.721 "model_number": "SPDK bdev Controller", 00:20:32.721 "max_namespaces": 32, 00:20:32.721 "min_cntlid": 1, 00:20:32.721 "max_cntlid": 65519, 00:20:32.721 "namespaces": [ 00:20:32.721 { 00:20:32.721 "nsid": 1, 00:20:32.721 "bdev_name": "Malloc0", 00:20:32.721 "name": "Malloc0", 00:20:32.721 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:32.721 "eui64": "ABCDEF0123456789", 00:20:32.721 "uuid": "4a5334dc-c27e-4728-9066-ae286e03590d" 00:20:32.721 } 00:20:32.721 ] 00:20:32.721 } 00:20:32.721 ] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.721 12:59:50 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:32.721 [2024-07-15 12:59:50.799496] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
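The rpc_cmd invocations above are effectively wrappers around SPDK's scripts/rpc.py client talking to the freshly started nvmf_tgt over its default UNIX socket. Written out explicitly, the same target configuration would look roughly like the sketch below (arguments copied from the log; the RPC client path and socket are left at their defaults and are assumptions of this sketch):

  #!/usr/bin/env bash
  RPC="./scripts/rpc.py"             # run from the SPDK source tree
  NQN="nqn.2016-06.io.spdk:cnode1"

  $RPC nvmf_create_transport -t tcp -o -u 8192       # flags copied verbatim from the test
  $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MB RAM-backed bdev, 512-byte blocks
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener "$NQN"    -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems           # should print the two-subsystem JSON shown above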
00:20:32.721 [2024-07-15 12:59:50.799535] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446040 ] 00:20:32.721 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.721 [2024-07-15 12:59:50.833076] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:32.721 [2024-07-15 12:59:50.833168] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:32.721 [2024-07-15 12:59:50.833178] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:32.721 [2024-07-15 12:59:50.833193] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:32.722 [2024-07-15 12:59:50.833220] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:32.722 [2024-07-15 12:59:50.836820] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:32.722 [2024-07-15 12:59:50.836873] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xecc540 0 00:20:32.722 [2024-07-15 12:59:50.844754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:32.722 [2024-07-15 12:59:50.844775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:32.722 [2024-07-15 12:59:50.844799] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:32.722 [2024-07-15 12:59:50.844805] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:32.722 [2024-07-15 12:59:50.844860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.844873] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.844881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.844900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:32.722 [2024-07-15 12:59:50.844927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.851753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.851771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.851778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.851786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.851802] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:32.722 [2024-07-15 12:59:50.851813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:32.722 [2024-07-15 12:59:50.851821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:32.722 [2024-07-15 12:59:50.851870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.851879] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.851885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.851896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.851920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.852080] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.852092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.852098] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.852125] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:32.722 [2024-07-15 12:59:50.852136] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:32.722 [2024-07-15 12:59:50.852148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852160] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.852170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.852198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.852297] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.852311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.852317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.852331] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:32.722 [2024-07-15 12:59:50.852344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:32.722 [2024-07-15 12:59:50.852356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.852378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.852398] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.852490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 
[2024-07-15 12:59:50.852503] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.852509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.852527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:32.722 [2024-07-15 12:59:50.852543] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852552] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.852567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.852587] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.852691] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.852704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.852710] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.852749] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:32.722 [2024-07-15 12:59:50.852758] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:32.722 [2024-07-15 12:59:50.852773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:32.722 [2024-07-15 12:59:50.852882] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:32.722 [2024-07-15 12:59:50.852891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:32.722 [2024-07-15 12:59:50.852905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.852918] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.852932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.852955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.853093] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.853108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.853114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:32.722 [2024-07-15 12:59:50.853120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.853128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:32.722 [2024-07-15 12:59:50.853143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.853167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.853187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.853272] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.853286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.853291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.853305] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:32.722 [2024-07-15 12:59:50.853313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:32.722 [2024-07-15 12:59:50.853325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:32.722 [2024-07-15 12:59:50.853339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:32.722 [2024-07-15 12:59:50.853354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853361] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.722 [2024-07-15 12:59:50.853371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.722 [2024-07-15 12:59:50.853392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.722 [2024-07-15 12:59:50.853541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.722 [2024-07-15 12:59:50.853555] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.722 [2024-07-15 12:59:50.853561] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853567] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecc540): datao=0, datal=4096, cccid=0 00:20:32.722 [2024-07-15 12:59:50.853574] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2c3c0) on tqpair(0xecc540): expected_datao=0, payload_size=4096 00:20:32.722 [2024-07-15 12:59:50.853581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:32.722 [2024-07-15 12:59:50.853592] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853599] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.722 [2024-07-15 12:59:50.853643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.722 [2024-07-15 12:59:50.853650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.722 [2024-07-15 12:59:50.853656] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.722 [2024-07-15 12:59:50.853669] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:32.722 [2024-07-15 12:59:50.853681] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:32.723 [2024-07-15 12:59:50.853689] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:32.723 [2024-07-15 12:59:50.853697] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:32.723 [2024-07-15 12:59:50.853705] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:32.723 [2024-07-15 12:59:50.853713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:32.723 [2024-07-15 12:59:50.853776] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:32.723 [2024-07-15 12:59:50.853807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.853815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.853821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.853832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:32.723 [2024-07-15 12:59:50.853855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.723 [2024-07-15 12:59:50.853997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.723 [2024-07-15 12:59:50.854012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.723 [2024-07-15 12:59:50.854018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.723 [2024-07-15 12:59:50.854052] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.854076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.723 [2024-07-15 12:59:50.854086] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854098] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.854123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.723 [2024-07-15 12:59:50.854132] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854138] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854144] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.854152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.723 [2024-07-15 12:59:50.854161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.854185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.723 [2024-07-15 12:59:50.854194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:32.723 [2024-07-15 12:59:50.854222] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:32.723 [2024-07-15 12:59:50.854234] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.854250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.723 [2024-07-15 12:59:50.854271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c3c0, cid 0, qid 0 00:20:32.723 [2024-07-15 12:59:50.854281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c540, cid 1, qid 0 00:20:32.723 [2024-07-15 12:59:50.854289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c6c0, cid 2, qid 0 00:20:32.723 [2024-07-15 12:59:50.854296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c840, cid 3, qid 0 00:20:32.723 [2024-07-15 12:59:50.854303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c9c0, cid 4, qid 0 00:20:32.723 [2024-07-15 12:59:50.854450] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.723 [2024-07-15 12:59:50.854463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.723 [2024-07-15 12:59:50.854469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c9c0) on tqpair=0xecc540 00:20:32.723 [2024-07-15 12:59:50.854484] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:32.723 [2024-07-15 12:59:50.854492] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:32.723 [2024-07-15 12:59:50.854509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854518] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.854527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.723 [2024-07-15 12:59:50.854547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c9c0, cid 4, qid 0 00:20:32.723 [2024-07-15 12:59:50.854655] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.723 [2024-07-15 12:59:50.854669] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.723 [2024-07-15 12:59:50.854676] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854681] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecc540): datao=0, datal=4096, cccid=4 00:20:32.723 [2024-07-15 12:59:50.854688] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2c9c0) on tqpair(0xecc540): expected_datao=0, payload_size=4096 00:20:32.723 [2024-07-15 12:59:50.854695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854711] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.854719] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.898755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.723 [2024-07-15 12:59:50.898774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.723 [2024-07-15 12:59:50.898796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.898803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c9c0) on tqpair=0xecc540 00:20:32.723 [2024-07-15 12:59:50.898828] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:32.723 [2024-07-15 12:59:50.898869] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.898880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.898891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.723 [2024-07-15 12:59:50.898903] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.898911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.898917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xecc540) 00:20:32.723 [2024-07-15 12:59:50.898926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.723 [2024-07-15 12:59:50.898955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xf2c9c0, cid 4, qid 0 00:20:32.723 [2024-07-15 12:59:50.898967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2cb40, cid 5, qid 0 00:20:32.723 [2024-07-15 12:59:50.899118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.723 [2024-07-15 12:59:50.899130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.723 [2024-07-15 12:59:50.899136] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.899142] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecc540): datao=0, datal=1024, cccid=4 00:20:32.723 [2024-07-15 12:59:50.899150] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2c9c0) on tqpair(0xecc540): expected_datao=0, payload_size=1024 00:20:32.723 [2024-07-15 12:59:50.899172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.899181] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.899188] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.899196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.723 [2024-07-15 12:59:50.899205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.723 [2024-07-15 12:59:50.899210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.723 [2024-07-15 12:59:50.899217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2cb40) on tqpair=0xecc540 00:20:32.982 [2024-07-15 12:59:50.939908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.982 [2024-07-15 12:59:50.939925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.982 [2024-07-15 12:59:50.939932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.939939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c9c0) on tqpair=0xecc540 00:20:32.982 [2024-07-15 12:59:50.939957] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.939965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecc540) 00:20:32.982 [2024-07-15 12:59:50.939976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.982 [2024-07-15 12:59:50.940004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c9c0, cid 4, qid 0 00:20:32.982 [2024-07-15 12:59:50.940169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.982 [2024-07-15 12:59:50.940183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.982 [2024-07-15 12:59:50.940190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940196] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecc540): datao=0, datal=3072, cccid=4 00:20:32.982 [2024-07-15 12:59:50.940203] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2c9c0) on tqpair(0xecc540): expected_datao=0, payload_size=3072 00:20:32.982 [2024-07-15 12:59:50.940210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940224] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940232] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.982 [2024-07-15 12:59:50.940273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.982 [2024-07-15 12:59:50.940279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c9c0) on tqpair=0xecc540 00:20:32.982 [2024-07-15 12:59:50.940301] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940309] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecc540) 00:20:32.982 [2024-07-15 12:59:50.940319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.982 [2024-07-15 12:59:50.940346] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c9c0, cid 4, qid 0 00:20:32.982 [2024-07-15 12:59:50.940456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.982 [2024-07-15 12:59:50.940469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.982 [2024-07-15 12:59:50.940476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940482] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecc540): datao=0, datal=8, cccid=4 00:20:32.982 [2024-07-15 12:59:50.940489] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2c9c0) on tqpair(0xecc540): expected_datao=0, payload_size=8 00:20:32.982 [2024-07-15 12:59:50.940496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940505] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.940512] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.982 [2024-07-15 12:59:50.980930] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.982 [2024-07-15 12:59:50.980949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.982 [2024-07-15 12:59:50.980957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.983 [2024-07-15 12:59:50.980964] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c9c0) on tqpair=0xecc540 00:20:32.983 ===================================================== 00:20:32.983 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:32.983 ===================================================== 00:20:32.983 Controller Capabilities/Features 00:20:32.983 ================================ 00:20:32.983 Vendor ID: 0000 00:20:32.983 Subsystem Vendor ID: 0000 00:20:32.983 Serial Number: .................... 00:20:32.983 Model Number: ........................................ 
00:20:32.983 Firmware Version: 24.09 00:20:32.983 Recommended Arb Burst: 0 00:20:32.983 IEEE OUI Identifier: 00 00 00 00:20:32.983 Multi-path I/O 00:20:32.983 May have multiple subsystem ports: No 00:20:32.983 May have multiple controllers: No 00:20:32.983 Associated with SR-IOV VF: No 00:20:32.983 Max Data Transfer Size: 131072 00:20:32.983 Max Number of Namespaces: 0 00:20:32.983 Max Number of I/O Queues: 1024 00:20:32.983 NVMe Specification Version (VS): 1.3 00:20:32.983 NVMe Specification Version (Identify): 1.3 00:20:32.983 Maximum Queue Entries: 128 00:20:32.983 Contiguous Queues Required: Yes 00:20:32.983 Arbitration Mechanisms Supported 00:20:32.983 Weighted Round Robin: Not Supported 00:20:32.983 Vendor Specific: Not Supported 00:20:32.983 Reset Timeout: 15000 ms 00:20:32.983 Doorbell Stride: 4 bytes 00:20:32.983 NVM Subsystem Reset: Not Supported 00:20:32.983 Command Sets Supported 00:20:32.983 NVM Command Set: Supported 00:20:32.983 Boot Partition: Not Supported 00:20:32.983 Memory Page Size Minimum: 4096 bytes 00:20:32.983 Memory Page Size Maximum: 4096 bytes 00:20:32.983 Persistent Memory Region: Not Supported 00:20:32.983 Optional Asynchronous Events Supported 00:20:32.983 Namespace Attribute Notices: Not Supported 00:20:32.983 Firmware Activation Notices: Not Supported 00:20:32.983 ANA Change Notices: Not Supported 00:20:32.983 PLE Aggregate Log Change Notices: Not Supported 00:20:32.983 LBA Status Info Alert Notices: Not Supported 00:20:32.983 EGE Aggregate Log Change Notices: Not Supported 00:20:32.983 Normal NVM Subsystem Shutdown event: Not Supported 00:20:32.983 Zone Descriptor Change Notices: Not Supported 00:20:32.983 Discovery Log Change Notices: Supported 00:20:32.983 Controller Attributes 00:20:32.983 128-bit Host Identifier: Not Supported 00:20:32.983 Non-Operational Permissive Mode: Not Supported 00:20:32.983 NVM Sets: Not Supported 00:20:32.983 Read Recovery Levels: Not Supported 00:20:32.983 Endurance Groups: Not Supported 00:20:32.983 Predictable Latency Mode: Not Supported 00:20:32.983 Traffic Based Keep ALive: Not Supported 00:20:32.983 Namespace Granularity: Not Supported 00:20:32.983 SQ Associations: Not Supported 00:20:32.983 UUID List: Not Supported 00:20:32.983 Multi-Domain Subsystem: Not Supported 00:20:32.983 Fixed Capacity Management: Not Supported 00:20:32.983 Variable Capacity Management: Not Supported 00:20:32.983 Delete Endurance Group: Not Supported 00:20:32.983 Delete NVM Set: Not Supported 00:20:32.983 Extended LBA Formats Supported: Not Supported 00:20:32.983 Flexible Data Placement Supported: Not Supported 00:20:32.983 00:20:32.983 Controller Memory Buffer Support 00:20:32.983 ================================ 00:20:32.983 Supported: No 00:20:32.983 00:20:32.983 Persistent Memory Region Support 00:20:32.983 ================================ 00:20:32.983 Supported: No 00:20:32.983 00:20:32.983 Admin Command Set Attributes 00:20:32.983 ============================ 00:20:32.983 Security Send/Receive: Not Supported 00:20:32.983 Format NVM: Not Supported 00:20:32.983 Firmware Activate/Download: Not Supported 00:20:32.983 Namespace Management: Not Supported 00:20:32.983 Device Self-Test: Not Supported 00:20:32.983 Directives: Not Supported 00:20:32.983 NVMe-MI: Not Supported 00:20:32.983 Virtualization Management: Not Supported 00:20:32.983 Doorbell Buffer Config: Not Supported 00:20:32.983 Get LBA Status Capability: Not Supported 00:20:32.983 Command & Feature Lockdown Capability: Not Supported 00:20:32.983 Abort Command Limit: 1 00:20:32.983 Async 
Event Request Limit: 4 00:20:32.983 Number of Firmware Slots: N/A 00:20:32.983 Firmware Slot 1 Read-Only: N/A 00:20:32.983 Firmware Activation Without Reset: N/A 00:20:32.983 Multiple Update Detection Support: N/A 00:20:32.983 Firmware Update Granularity: No Information Provided 00:20:32.983 Per-Namespace SMART Log: No 00:20:32.983 Asymmetric Namespace Access Log Page: Not Supported 00:20:32.983 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:32.983 Command Effects Log Page: Not Supported 00:20:32.983 Get Log Page Extended Data: Supported 00:20:32.983 Telemetry Log Pages: Not Supported 00:20:32.983 Persistent Event Log Pages: Not Supported 00:20:32.983 Supported Log Pages Log Page: May Support 00:20:32.983 Commands Supported & Effects Log Page: Not Supported 00:20:32.983 Feature Identifiers & Effects Log Page:May Support 00:20:32.983 NVMe-MI Commands & Effects Log Page: May Support 00:20:32.983 Data Area 4 for Telemetry Log: Not Supported 00:20:32.983 Error Log Page Entries Supported: 128 00:20:32.983 Keep Alive: Not Supported 00:20:32.983 00:20:32.983 NVM Command Set Attributes 00:20:32.983 ========================== 00:20:32.983 Submission Queue Entry Size 00:20:32.983 Max: 1 00:20:32.983 Min: 1 00:20:32.983 Completion Queue Entry Size 00:20:32.983 Max: 1 00:20:32.983 Min: 1 00:20:32.983 Number of Namespaces: 0 00:20:32.983 Compare Command: Not Supported 00:20:32.983 Write Uncorrectable Command: Not Supported 00:20:32.983 Dataset Management Command: Not Supported 00:20:32.983 Write Zeroes Command: Not Supported 00:20:32.983 Set Features Save Field: Not Supported 00:20:32.983 Reservations: Not Supported 00:20:32.983 Timestamp: Not Supported 00:20:32.983 Copy: Not Supported 00:20:32.983 Volatile Write Cache: Not Present 00:20:32.983 Atomic Write Unit (Normal): 1 00:20:32.983 Atomic Write Unit (PFail): 1 00:20:32.983 Atomic Compare & Write Unit: 1 00:20:32.983 Fused Compare & Write: Supported 00:20:32.983 Scatter-Gather List 00:20:32.983 SGL Command Set: Supported 00:20:32.983 SGL Keyed: Supported 00:20:32.983 SGL Bit Bucket Descriptor: Not Supported 00:20:32.983 SGL Metadata Pointer: Not Supported 00:20:32.983 Oversized SGL: Not Supported 00:20:32.983 SGL Metadata Address: Not Supported 00:20:32.983 SGL Offset: Supported 00:20:32.983 Transport SGL Data Block: Not Supported 00:20:32.983 Replay Protected Memory Block: Not Supported 00:20:32.983 00:20:32.983 Firmware Slot Information 00:20:32.983 ========================= 00:20:32.983 Active slot: 0 00:20:32.983 00:20:32.983 00:20:32.983 Error Log 00:20:32.983 ========= 00:20:32.983 00:20:32.983 Active Namespaces 00:20:32.983 ================= 00:20:32.983 Discovery Log Page 00:20:32.983 ================== 00:20:32.983 Generation Counter: 2 00:20:32.983 Number of Records: 2 00:20:32.983 Record Format: 0 00:20:32.983 00:20:32.983 Discovery Log Entry 0 00:20:32.983 ---------------------- 00:20:32.983 Transport Type: 3 (TCP) 00:20:32.983 Address Family: 1 (IPv4) 00:20:32.983 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:32.983 Entry Flags: 00:20:32.983 Duplicate Returned Information: 1 00:20:32.983 Explicit Persistent Connection Support for Discovery: 1 00:20:32.983 Transport Requirements: 00:20:32.983 Secure Channel: Not Required 00:20:32.983 Port ID: 0 (0x0000) 00:20:32.983 Controller ID: 65535 (0xffff) 00:20:32.983 Admin Max SQ Size: 128 00:20:32.983 Transport Service Identifier: 4420 00:20:32.983 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:32.983 Transport Address: 10.0.0.2 00:20:32.983 
Discovery Log Entry 1 00:20:32.983 ---------------------- 00:20:32.983 Transport Type: 3 (TCP) 00:20:32.983 Address Family: 1 (IPv4) 00:20:32.983 Subsystem Type: 2 (NVM Subsystem) 00:20:32.983 Entry Flags: 00:20:32.983 Duplicate Returned Information: 0 00:20:32.983 Explicit Persistent Connection Support for Discovery: 0 00:20:32.983 Transport Requirements: 00:20:32.983 Secure Channel: Not Required 00:20:32.983 Port ID: 0 (0x0000) 00:20:32.983 Controller ID: 65535 (0xffff) 00:20:32.983 Admin Max SQ Size: 128 00:20:32.983 Transport Service Identifier: 4420 00:20:32.983 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:32.983 Transport Address: 10.0.0.2 [2024-07-15 12:59:50.981111] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:32.983 [2024-07-15 12:59:50.981133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c3c0) on tqpair=0xecc540 00:20:32.983 [2024-07-15 12:59:50.981144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.983 [2024-07-15 12:59:50.981153] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c540) on tqpair=0xecc540 00:20:32.983 [2024-07-15 12:59:50.981160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.983 [2024-07-15 12:59:50.981168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c6c0) on tqpair=0xecc540 00:20:32.983 [2024-07-15 12:59:50.981175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.983 [2024-07-15 12:59:50.981183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c840) on tqpair=0xecc540 00:20:32.984 [2024-07-15 12:59:50.981190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.984 [2024-07-15 12:59:50.981216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecc540) 00:20:32.984 [2024-07-15 12:59:50.981241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:50.981277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c840, cid 3, qid 0 00:20:32.984 [2024-07-15 12:59:50.981456] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:50.981467] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:50.981474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981480] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c840) on tqpair=0xecc540 00:20:32.984 [2024-07-15 12:59:50.981491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecc540) 00:20:32.984 [2024-07-15 12:59:50.981514] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:50.981539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c840, cid 3, qid 0 00:20:32.984 [2024-07-15 12:59:50.981639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:50.981652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:50.981658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981664] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c840) on tqpair=0xecc540 00:20:32.984 [2024-07-15 12:59:50.981672] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:32.984 [2024-07-15 12:59:50.981679] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:32.984 [2024-07-15 12:59:50.981695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.981709] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecc540) 00:20:32.984 [2024-07-15 12:59:50.981733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:50.985769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2c840, cid 3, qid 0 00:20:32.984 [2024-07-15 12:59:50.985979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:50.985991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:50.985997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:50.986004] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2c840) on tqpair=0xecc540 00:20:32.984 [2024-07-15 12:59:50.986018] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:20:32.984 00:20:32.984 12:59:51 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:32.984 [2024-07-15 12:59:51.017827] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
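The discovery log page printed above advertises two entries at 10.0.0.2:4420: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1 backed by the Malloc0 namespace. The identify binary consumes those entries through SPDK's userspace initiator; as a hedged alternative, not something this run actually does, the same target could also be exercised with the kernel initiator from the initiator side of the link, assuming nvme-cli is installed (the nvme-tcp module was already loaded earlier in the log):

  # Query the discovery service; should list the same two entries as above
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Attach the NVM subsystem and check that the Malloc0 namespace shows up
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list
  # Detach when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1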
00:20:32.984 [2024-07-15 12:59:51.017867] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3446166 ] 00:20:32.984 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.984 [2024-07-15 12:59:51.051505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:32.984 [2024-07-15 12:59:51.051557] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:32.984 [2024-07-15 12:59:51.051567] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:32.984 [2024-07-15 12:59:51.051580] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:32.984 [2024-07-15 12:59:51.051589] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:32.984 [2024-07-15 12:59:51.051803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:32.984 [2024-07-15 12:59:51.051844] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x576540 0 00:20:32.984 [2024-07-15 12:59:51.058773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:32.984 [2024-07-15 12:59:51.058791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:32.984 [2024-07-15 12:59:51.058798] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:32.984 [2024-07-15 12:59:51.058804] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:32.984 [2024-07-15 12:59:51.058858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.058870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.058876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.984 [2024-07-15 12:59:51.058890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:32.984 [2024-07-15 12:59:51.058916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.984 [2024-07-15 12:59:51.066750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:51.066767] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:51.066774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.066781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.984 [2024-07-15 12:59:51.066799] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:32.984 [2024-07-15 12:59:51.066810] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:32.984 [2024-07-15 12:59:51.066819] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:32.984 [2024-07-15 12:59:51.066836] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.066845] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 
[2024-07-15 12:59:51.066851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.984 [2024-07-15 12:59:51.066861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:51.066884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.984 [2024-07-15 12:59:51.067038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:51.067050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:51.067056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067063] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.984 [2024-07-15 12:59:51.067070] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:32.984 [2024-07-15 12:59:51.067083] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:32.984 [2024-07-15 12:59:51.067094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067107] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.984 [2024-07-15 12:59:51.067121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:51.067142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.984 [2024-07-15 12:59:51.067231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:51.067242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:51.067249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067255] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.984 [2024-07-15 12:59:51.067262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:32.984 [2024-07-15 12:59:51.067275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:32.984 [2024-07-15 12:59:51.067286] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.984 [2024-07-15 12:59:51.067309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:51.067328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.984 [2024-07-15 12:59:51.067414] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:51.067428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 
[2024-07-15 12:59:51.067435] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.984 [2024-07-15 12:59:51.067449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:32.984 [2024-07-15 12:59:51.067465] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067473] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.984 [2024-07-15 12:59:51.067489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.984 [2024-07-15 12:59:51.067509] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.984 [2024-07-15 12:59:51.067593] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.984 [2024-07-15 12:59:51.067606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.984 [2024-07-15 12:59:51.067612] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.984 [2024-07-15 12:59:51.067619] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.984 [2024-07-15 12:59:51.067626] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:32.984 [2024-07-15 12:59:51.067633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:32.984 [2024-07-15 12:59:51.067646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:32.984 [2024-07-15 12:59:51.067755] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:32.984 [2024-07-15 12:59:51.067765] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:32.985 [2024-07-15 12:59:51.067777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.067803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.067810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.067821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.985 [2024-07-15 12:59:51.067843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.985 [2024-07-15 12:59:51.068063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.985 [2024-07-15 12:59:51.068075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.985 [2024-07-15 12:59:51.068082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.985 [2024-07-15 
12:59:51.068096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:32.985 [2024-07-15 12:59:51.068127] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.068151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.985 [2024-07-15 12:59:51.068171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.985 [2024-07-15 12:59:51.068263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.985 [2024-07-15 12:59:51.068277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.985 [2024-07-15 12:59:51.068283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068290] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.985 [2024-07-15 12:59:51.068297] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:32.985 [2024-07-15 12:59:51.068304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.068317] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:32.985 [2024-07-15 12:59:51.068329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.068342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.068360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.985 [2024-07-15 12:59:51.068379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.985 [2024-07-15 12:59:51.068490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.985 [2024-07-15 12:59:51.068502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.985 [2024-07-15 12:59:51.068508] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068514] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=4096, cccid=0 00:20:32.985 [2024-07-15 12:59:51.068521] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d63c0) on tqpair(0x576540): expected_datao=0, payload_size=4096 00:20:32.985 [2024-07-15 12:59:51.068528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068544] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068552] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.985 
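At this point the trace shows the IDENTIFY CONTROLLER (CNS 01h) data arriving as a C2H data PDU and the driver recording the transport and MDTS transfer limits. As a hedged illustration only (the helper name is hypothetical, not part of the test), once spdk_nvme_connect() returns, that identify data is cached and can be read back without issuing new commands:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper: print a few IDENTIFY CONTROLLER fields that the
     * exchange traced above has already transferred and cached in the driver. */
    static void print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        /* sn/mn are fixed-width, space-padded fields, hence the length-limited formats. */
        printf("CNTLID 0x%04x  MDTS %u  SN %.20s  MN %.40s\n",
               cdata->cntlid, cdata->mdts, cdata->sn, cdata->mn);
    }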
[2024-07-15 12:59:51.068567] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.985 [2024-07-15 12:59:51.068577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.985 [2024-07-15 12:59:51.068583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.985 [2024-07-15 12:59:51.068599] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:32.985 [2024-07-15 12:59:51.068611] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:32.985 [2024-07-15 12:59:51.068618] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:32.985 [2024-07-15 12:59:51.068624] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:32.985 [2024-07-15 12:59:51.068631] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:32.985 [2024-07-15 12:59:51.068639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.068652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.068663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068675] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.068685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:32.985 [2024-07-15 12:59:51.068705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.985 [2024-07-15 12:59:51.068830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.985 [2024-07-15 12:59:51.068846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.985 [2024-07-15 12:59:51.068852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:32.985 [2024-07-15 12:59:51.068869] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068882] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.068892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.985 [2024-07-15 12:59:51.068901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x576540) 
00:20:32.985 [2024-07-15 12:59:51.068922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.985 [2024-07-15 12:59:51.068931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.068952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.985 [2024-07-15 12:59:51.068961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.068977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.068986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.985 [2024-07-15 12:59:51.068995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.069032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.069055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.985 [2024-07-15 12:59:51.069078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d63c0, cid 0, qid 0 00:20:32.985 [2024-07-15 12:59:51.069089] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6540, cid 1, qid 0 00:20:32.985 [2024-07-15 12:59:51.069096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d66c0, cid 2, qid 0 00:20:32.985 [2024-07-15 12:59:51.069103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6840, cid 3, qid 0 00:20:32.985 [2024-07-15 12:59:51.069110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.985 [2024-07-15 12:59:51.069271] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.985 [2024-07-15 12:59:51.069285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.985 [2024-07-15 12:59:51.069291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.069298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:32.985 [2024-07-15 12:59:51.069305] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:32.985 [2024-07-15 12:59:51.069313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069327] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.069354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.069360] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.069370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:32.985 [2024-07-15 12:59:51.069389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.985 [2024-07-15 12:59:51.069581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.985 [2024-07-15 12:59:51.069593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.985 [2024-07-15 12:59:51.069599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.069605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:32.985 [2024-07-15 12:59:51.069673] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:32.985 [2024-07-15 12:59:51.069705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.985 [2024-07-15 12:59:51.069715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.985 [2024-07-15 12:59:51.069747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.985 [2024-07-15 12:59:51.069803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.985 [2024-07-15 12:59:51.070007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.985 [2024-07-15 12:59:51.070019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.986 [2024-07-15 12:59:51.070026] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070032] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=4096, cccid=4 00:20:32.986 [2024-07-15 12:59:51.070040] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d69c0) on tqpair(0x576540): expected_datao=0, payload_size=4096 00:20:32.986 [2024-07-15 12:59:51.070047] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070057] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070079] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070091] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.070101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.070107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.070142] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:32.986 [2024-07-15 12:59:51.070163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.070180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.070193] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.070210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.986 [2024-07-15 12:59:51.070230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.986 [2024-07-15 12:59:51.070355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.986 [2024-07-15 12:59:51.070370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.986 [2024-07-15 12:59:51.070376] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070382] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=4096, cccid=4 00:20:32.986 [2024-07-15 12:59:51.070390] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d69c0) on tqpair(0x576540): expected_datao=0, payload_size=4096 00:20:32.986 [2024-07-15 12:59:51.070397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070413] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.070421] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.111749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.111766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.111773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.111780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.111818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.111842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.111857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.111865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.111876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.986 [2024-07-15 12:59:51.111900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.986 [2024-07-15 12:59:51.112059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.986 [2024-07-15 12:59:51.112071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.986 [2024-07-15 12:59:51.112077] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.112084] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=4096, cccid=4 00:20:32.986 [2024-07-15 12:59:51.112091] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d69c0) on tqpair(0x576540): expected_datao=0, payload_size=4096 00:20:32.986 [2024-07-15 12:59:51.112113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.112130] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.112138] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.152880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.152898] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.152905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.152912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.152925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.152941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.152958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.152969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.152977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.152986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.152994] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:32.986 [2024-07-15 12:59:51.153001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:32.986 [2024-07-15 12:59:51.153010] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:32.986 [2024-07-15 12:59:51.153029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153037] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.153062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.986 [2024-07-15 12:59:51.153074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153090] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.153100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.986 [2024-07-15 12:59:51.153125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.986 [2024-07-15 12:59:51.153137] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6b40, cid 5, qid 0 00:20:32.986 [2024-07-15 12:59:51.153242] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.153256] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.153262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.153279] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.153287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.153293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6b40) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.153314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.153332] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.986 [2024-07-15 12:59:51.153352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6b40, cid 5, qid 0 00:20:32.986 [2024-07-15 12:59:51.153440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.153454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.153460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6b40) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.153482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.153500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.986 [2024-07-15 12:59:51.153520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6b40, cid 5, qid 0 00:20:32.986 [2024-07-15 12:59:51.153608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.153621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.153627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6b40) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.153649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x576540) 00:20:32.986 [2024-07-15 12:59:51.153667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.986 [2024-07-15 12:59:51.153686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6b40, cid 5, qid 0 00:20:32.986 [2024-07-15 12:59:51.153793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:32.986 [2024-07-15 12:59:51.153809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:32.986 [2024-07-15 12:59:51.153816] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6b40) on tqpair=0x576540 00:20:32.986 [2024-07-15 12:59:51.153850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.986 [2024-07-15 12:59:51.153861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x576540) 00:20:32.987 [2024-07-15 12:59:51.153872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.987 [2024-07-15 12:59:51.153885] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.153892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x576540) 00:20:32.987 [2024-07-15 12:59:51.153902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.987 [2024-07-15 12:59:51.153913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.153920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x576540) 00:20:32.987 [2024-07-15 12:59:51.153930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.987 [2024-07-15 12:59:51.153941] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.153949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x576540) 00:20:32.987 [2024-07-15 12:59:51.153958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:32.987 [2024-07-15 12:59:51.153981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6b40, cid 5, qid 0 00:20:32.987 [2024-07-15 12:59:51.153992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d69c0, cid 4, qid 0 00:20:32.987 [2024-07-15 12:59:51.154000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6cc0, cid 6, qid 0 00:20:32.987 [2024-07-15 
12:59:51.154008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6e40, cid 7, qid 0 00:20:32.987 [2024-07-15 12:59:51.154333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.987 [2024-07-15 12:59:51.154345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.987 [2024-07-15 12:59:51.154352] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154358] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=8192, cccid=5 00:20:32.987 [2024-07-15 12:59:51.154365] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d6b40) on tqpair(0x576540): expected_datao=0, payload_size=8192 00:20:32.987 [2024-07-15 12:59:51.154372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154392] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154401] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154409] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.987 [2024-07-15 12:59:51.154417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.987 [2024-07-15 12:59:51.154423] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154429] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=512, cccid=4 00:20:32.987 [2024-07-15 12:59:51.154436] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d69c0) on tqpair(0x576540): expected_datao=0, payload_size=512 00:20:32.987 [2024-07-15 12:59:51.154442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154451] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154457] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.987 [2024-07-15 12:59:51.154473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.987 [2024-07-15 12:59:51.154483] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154489] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=512, cccid=6 00:20:32.987 [2024-07-15 12:59:51.154496] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d6cc0) on tqpair(0x576540): expected_datao=0, payload_size=512 00:20:32.987 [2024-07-15 12:59:51.154503] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154512] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154518] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:32.987 [2024-07-15 12:59:51.154534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:32.987 [2024-07-15 12:59:51.154540] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154546] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x576540): datao=0, datal=4096, cccid=7 00:20:32.987 [2024-07-15 12:59:51.154552] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5d6e40) on tqpair(0x576540): expected_datao=0, payload_size=4096 00:20:32.987 [2024-07-15 12:59:51.154559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154568] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:32.987 [2024-07-15 12:59:51.154575] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.246 [2024-07-15 12:59:51.194890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.246 [2024-07-15 12:59:51.194909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.246 [2024-07-15 12:59:51.194916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.247 [2024-07-15 12:59:51.194923] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6b40) on tqpair=0x576540 00:20:33.247 [2024-07-15 12:59:51.194943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.247 [2024-07-15 12:59:51.194954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.247 [2024-07-15 12:59:51.194960] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.247 [2024-07-15 12:59:51.194967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d69c0) on tqpair=0x576540 00:20:33.247 [2024-07-15 12:59:51.194982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.247 [2024-07-15 12:59:51.194992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.247 [2024-07-15 12:59:51.194998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.247 [2024-07-15 12:59:51.195005] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6cc0) on tqpair=0x576540 00:20:33.247 [2024-07-15 12:59:51.195015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.247 [2024-07-15 12:59:51.195024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.247 [2024-07-15 12:59:51.195030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.247 [2024-07-15 12:59:51.195037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6e40) on tqpair=0x576540 00:20:33.247 ===================================================== 00:20:33.247 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.247 ===================================================== 00:20:33.247 Controller Capabilities/Features 00:20:33.247 ================================ 00:20:33.247 Vendor ID: 8086 00:20:33.247 Subsystem Vendor ID: 8086 00:20:33.247 Serial Number: SPDK00000000000001 00:20:33.247 Model Number: SPDK bdev Controller 00:20:33.247 Firmware Version: 24.09 00:20:33.247 Recommended Arb Burst: 6 00:20:33.247 IEEE OUI Identifier: e4 d2 5c 00:20:33.247 Multi-path I/O 00:20:33.247 May have multiple subsystem ports: Yes 00:20:33.247 May have multiple controllers: Yes 00:20:33.247 Associated with SR-IOV VF: No 00:20:33.247 Max Data Transfer Size: 131072 00:20:33.247 Max Number of Namespaces: 32 00:20:33.247 Max Number of I/O Queues: 127 00:20:33.247 NVMe Specification Version (VS): 1.3 00:20:33.247 NVMe Specification Version (Identify): 1.3 00:20:33.247 Maximum Queue Entries: 128 00:20:33.247 Contiguous Queues Required: Yes 00:20:33.247 Arbitration Mechanisms Supported 00:20:33.247 Weighted Round Robin: Not Supported 00:20:33.247 Vendor Specific: Not Supported 00:20:33.247 Reset Timeout: 15000 ms 00:20:33.247 
Doorbell Stride: 4 bytes 00:20:33.247 NVM Subsystem Reset: Not Supported 00:20:33.247 Command Sets Supported 00:20:33.247 NVM Command Set: Supported 00:20:33.247 Boot Partition: Not Supported 00:20:33.247 Memory Page Size Minimum: 4096 bytes 00:20:33.247 Memory Page Size Maximum: 4096 bytes 00:20:33.247 Persistent Memory Region: Not Supported 00:20:33.247 Optional Asynchronous Events Supported 00:20:33.247 Namespace Attribute Notices: Supported 00:20:33.247 Firmware Activation Notices: Not Supported 00:20:33.247 ANA Change Notices: Not Supported 00:20:33.247 PLE Aggregate Log Change Notices: Not Supported 00:20:33.247 LBA Status Info Alert Notices: Not Supported 00:20:33.247 EGE Aggregate Log Change Notices: Not Supported 00:20:33.247 Normal NVM Subsystem Shutdown event: Not Supported 00:20:33.247 Zone Descriptor Change Notices: Not Supported 00:20:33.247 Discovery Log Change Notices: Not Supported 00:20:33.247 Controller Attributes 00:20:33.247 128-bit Host Identifier: Supported 00:20:33.247 Non-Operational Permissive Mode: Not Supported 00:20:33.247 NVM Sets: Not Supported 00:20:33.247 Read Recovery Levels: Not Supported 00:20:33.247 Endurance Groups: Not Supported 00:20:33.247 Predictable Latency Mode: Not Supported 00:20:33.247 Traffic Based Keep ALive: Not Supported 00:20:33.247 Namespace Granularity: Not Supported 00:20:33.247 SQ Associations: Not Supported 00:20:33.247 UUID List: Not Supported 00:20:33.247 Multi-Domain Subsystem: Not Supported 00:20:33.247 Fixed Capacity Management: Not Supported 00:20:33.247 Variable Capacity Management: Not Supported 00:20:33.247 Delete Endurance Group: Not Supported 00:20:33.247 Delete NVM Set: Not Supported 00:20:33.247 Extended LBA Formats Supported: Not Supported 00:20:33.247 Flexible Data Placement Supported: Not Supported 00:20:33.247 00:20:33.247 Controller Memory Buffer Support 00:20:33.247 ================================ 00:20:33.247 Supported: No 00:20:33.247 00:20:33.247 Persistent Memory Region Support 00:20:33.247 ================================ 00:20:33.247 Supported: No 00:20:33.247 00:20:33.247 Admin Command Set Attributes 00:20:33.247 ============================ 00:20:33.247 Security Send/Receive: Not Supported 00:20:33.247 Format NVM: Not Supported 00:20:33.247 Firmware Activate/Download: Not Supported 00:20:33.247 Namespace Management: Not Supported 00:20:33.247 Device Self-Test: Not Supported 00:20:33.247 Directives: Not Supported 00:20:33.247 NVMe-MI: Not Supported 00:20:33.247 Virtualization Management: Not Supported 00:20:33.247 Doorbell Buffer Config: Not Supported 00:20:33.247 Get LBA Status Capability: Not Supported 00:20:33.247 Command & Feature Lockdown Capability: Not Supported 00:20:33.247 Abort Command Limit: 4 00:20:33.247 Async Event Request Limit: 4 00:20:33.247 Number of Firmware Slots: N/A 00:20:33.247 Firmware Slot 1 Read-Only: N/A 00:20:33.247 Firmware Activation Without Reset: N/A 00:20:33.247 Multiple Update Detection Support: N/A 00:20:33.247 Firmware Update Granularity: No Information Provided 00:20:33.247 Per-Namespace SMART Log: No 00:20:33.247 Asymmetric Namespace Access Log Page: Not Supported 00:20:33.247 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:33.247 Command Effects Log Page: Supported 00:20:33.247 Get Log Page Extended Data: Supported 00:20:33.247 Telemetry Log Pages: Not Supported 00:20:33.247 Persistent Event Log Pages: Not Supported 00:20:33.247 Supported Log Pages Log Page: May Support 00:20:33.247 Commands Supported & Effects Log Page: Not Supported 00:20:33.247 Feature Identifiers & 
Effects Log Page:May Support 00:20:33.247 NVMe-MI Commands & Effects Log Page: May Support 00:20:33.247 Data Area 4 for Telemetry Log: Not Supported 00:20:33.247 Error Log Page Entries Supported: 128 00:20:33.247 Keep Alive: Supported 00:20:33.247 Keep Alive Granularity: 10000 ms 00:20:33.247 00:20:33.247 NVM Command Set Attributes 00:20:33.247 ========================== 00:20:33.247 Submission Queue Entry Size 00:20:33.247 Max: 64 00:20:33.247 Min: 64 00:20:33.247 Completion Queue Entry Size 00:20:33.247 Max: 16 00:20:33.247 Min: 16 00:20:33.247 Number of Namespaces: 32 00:20:33.247 Compare Command: Supported 00:20:33.247 Write Uncorrectable Command: Not Supported 00:20:33.247 Dataset Management Command: Supported 00:20:33.247 Write Zeroes Command: Supported 00:20:33.247 Set Features Save Field: Not Supported 00:20:33.247 Reservations: Supported 00:20:33.247 Timestamp: Not Supported 00:20:33.247 Copy: Supported 00:20:33.247 Volatile Write Cache: Present 00:20:33.247 Atomic Write Unit (Normal): 1 00:20:33.247 Atomic Write Unit (PFail): 1 00:20:33.247 Atomic Compare & Write Unit: 1 00:20:33.247 Fused Compare & Write: Supported 00:20:33.247 Scatter-Gather List 00:20:33.247 SGL Command Set: Supported 00:20:33.247 SGL Keyed: Supported 00:20:33.247 SGL Bit Bucket Descriptor: Not Supported 00:20:33.247 SGL Metadata Pointer: Not Supported 00:20:33.247 Oversized SGL: Not Supported 00:20:33.247 SGL Metadata Address: Not Supported 00:20:33.247 SGL Offset: Supported 00:20:33.247 Transport SGL Data Block: Not Supported 00:20:33.247 Replay Protected Memory Block: Not Supported 00:20:33.247 00:20:33.247 Firmware Slot Information 00:20:33.247 ========================= 00:20:33.247 Active slot: 1 00:20:33.247 Slot 1 Firmware Revision: 24.09 00:20:33.247 00:20:33.247 00:20:33.247 Commands Supported and Effects 00:20:33.247 ============================== 00:20:33.247 Admin Commands 00:20:33.247 -------------- 00:20:33.247 Get Log Page (02h): Supported 00:20:33.247 Identify (06h): Supported 00:20:33.247 Abort (08h): Supported 00:20:33.247 Set Features (09h): Supported 00:20:33.247 Get Features (0Ah): Supported 00:20:33.247 Asynchronous Event Request (0Ch): Supported 00:20:33.247 Keep Alive (18h): Supported 00:20:33.247 I/O Commands 00:20:33.247 ------------ 00:20:33.247 Flush (00h): Supported LBA-Change 00:20:33.247 Write (01h): Supported LBA-Change 00:20:33.247 Read (02h): Supported 00:20:33.247 Compare (05h): Supported 00:20:33.247 Write Zeroes (08h): Supported LBA-Change 00:20:33.247 Dataset Management (09h): Supported LBA-Change 00:20:33.247 Copy (19h): Supported LBA-Change 00:20:33.247 00:20:33.247 Error Log 00:20:33.247 ========= 00:20:33.247 00:20:33.247 Arbitration 00:20:33.247 =========== 00:20:33.247 Arbitration Burst: 1 00:20:33.247 00:20:33.247 Power Management 00:20:33.247 ================ 00:20:33.247 Number of Power States: 1 00:20:33.247 Current Power State: Power State #0 00:20:33.247 Power State #0: 00:20:33.247 Max Power: 0.00 W 00:20:33.247 Non-Operational State: Operational 00:20:33.247 Entry Latency: Not Reported 00:20:33.247 Exit Latency: Not Reported 00:20:33.247 Relative Read Throughput: 0 00:20:33.247 Relative Read Latency: 0 00:20:33.247 Relative Write Throughput: 0 00:20:33.247 Relative Write Latency: 0 00:20:33.247 Idle Power: Not Reported 00:20:33.247 Active Power: Not Reported 00:20:33.247 Non-Operational Permissive Mode: Not Supported 00:20:33.247 00:20:33.247 Health Information 00:20:33.247 ================== 00:20:33.248 Critical Warnings: 00:20:33.248 Available Spare Space: 
OK 00:20:33.248 Temperature: OK 00:20:33.248 Device Reliability: OK 00:20:33.248 Read Only: No 00:20:33.248 Volatile Memory Backup: OK 00:20:33.248 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:33.248 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:33.248 Available Spare: 0% 00:20:33.248 Available Spare Threshold: 0% 00:20:33.248 Life Percentage Used:[2024-07-15 12:59:51.195163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.195176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x576540) 00:20:33.248 [2024-07-15 12:59:51.195187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.248 [2024-07-15 12:59:51.195219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6e40, cid 7, qid 0 00:20:33.248 [2024-07-15 12:59:51.195401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.248 [2024-07-15 12:59:51.195413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.248 [2024-07-15 12:59:51.195419] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.195425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6e40) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.195485] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:33.248 [2024-07-15 12:59:51.195508] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d63c0) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.195519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.248 [2024-07-15 12:59:51.195541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6540) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.195550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.248 [2024-07-15 12:59:51.195558] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d66c0) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.195566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.248 [2024-07-15 12:59:51.195574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6840) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.195581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:33.248 [2024-07-15 12:59:51.195594] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.195602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.195609] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x576540) 00:20:33.248 [2024-07-15 12:59:51.195619] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.248 [2024-07-15 12:59:51.195643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6840, cid 3, qid 0 00:20:33.248 [2024-07-15 12:59:51.199749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.248 [2024-07-15 12:59:51.199766] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.248 [2024-07-15 12:59:51.199773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.199780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6840) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.199792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.199799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.199806] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x576540) 00:20:33.248 [2024-07-15 12:59:51.199817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.248 [2024-07-15 12:59:51.199845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6840, cid 3, qid 0 00:20:33.248 [2024-07-15 12:59:51.199999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.248 [2024-07-15 12:59:51.200011] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.248 [2024-07-15 12:59:51.200018] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.200039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6840) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.200046] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:33.248 [2024-07-15 12:59:51.200053] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:33.248 [2024-07-15 12:59:51.200069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.200077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.200083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x576540) 00:20:33.248 [2024-07-15 12:59:51.200093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.248 [2024-07-15 12:59:51.200112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6840, cid 3, qid 0 00:20:33.248 [2024-07-15 12:59:51.200202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.248 [2024-07-15 12:59:51.200217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.248 [2024-07-15 12:59:51.200223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.200229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6840) on tqpair=0x576540 00:20:33.248 [2024-07-15 12:59:51.200245] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.200254] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.248 [2024-07-15 12:59:51.200260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x576540) 00:20:33.248 [2024-07-15 12:59:51.200270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.248 [2024-07-15 12:59:51.200289] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6840, cid 3, qid 0 00:20:33.248 [2024-07-15 12:59:51.200372] 
[2024-07-15 12:59:51.200372 to 12:59:51.207776] nvme_tcp.c/nvme_qpair.c: repeated shutdown-poll DEBUG cycles (pdu type = 5, complete tcp_req(0x5d6840) on tqpair=0x576540, FABRIC PROPERTY GET qid:0 cid:3), identical apart from timestamps, elided; the final poll cycle and the shutdown completion continue below.
12:59:51.207776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.250 [2024-07-15 12:59:51.207783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6840) on tqpair=0x576540 00:20:33.250 [2024-07-15 12:59:51.207800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.250 [2024-07-15 12:59:51.207808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.250 [2024-07-15 12:59:51.207815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x576540) 00:20:33.250 [2024-07-15 12:59:51.207828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.250 [2024-07-15 12:59:51.207850] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5d6840, cid 3, qid 0 00:20:33.250 [2024-07-15 12:59:51.207993] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.250 [2024-07-15 12:59:51.208004] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.250 [2024-07-15 12:59:51.208011] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.250 [2024-07-15 12:59:51.208017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5d6840) on tqpair=0x576540 00:20:33.250 [2024-07-15 12:59:51.208045] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:33.250 0% 00:20:33.250 Data Units Read: 0 00:20:33.250 Data Units Written: 0 00:20:33.250 Host Read Commands: 0 00:20:33.250 Host Write Commands: 0 00:20:33.250 Controller Busy Time: 0 minutes 00:20:33.250 Power Cycles: 0 00:20:33.250 Power On Hours: 0 hours 00:20:33.250 Unsafe Shutdowns: 0 00:20:33.250 Unrecoverable Media Errors: 0 00:20:33.250 Lifetime Error Log Entries: 0 00:20:33.250 Warning Temperature Time: 0 minutes 00:20:33.250 Critical Temperature Time: 0 minutes 00:20:33.250 00:20:33.250 Number of Queues 00:20:33.250 ================ 00:20:33.250 Number of I/O Submission Queues: 127 00:20:33.250 Number of I/O Completion Queues: 127 00:20:33.250 00:20:33.250 Active Namespaces 00:20:33.250 ================= 00:20:33.250 Namespace ID:1 00:20:33.250 Error Recovery Timeout: Unlimited 00:20:33.250 Command Set Identifier: NVM (00h) 00:20:33.250 Deallocate: Supported 00:20:33.250 Deallocated/Unwritten Error: Not Supported 00:20:33.250 Deallocated Read Value: Unknown 00:20:33.250 Deallocate in Write Zeroes: Not Supported 00:20:33.250 Deallocated Guard Field: 0xFFFF 00:20:33.250 Flush: Supported 00:20:33.250 Reservation: Supported 00:20:33.250 Namespace Sharing Capabilities: Multiple Controllers 00:20:33.250 Size (in LBAs): 131072 (0GiB) 00:20:33.250 Capacity (in LBAs): 131072 (0GiB) 00:20:33.250 Utilization (in LBAs): 131072 (0GiB) 00:20:33.250 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:33.250 EUI64: ABCDEF0123456789 00:20:33.250 UUID: 4a5334dc-c27e-4728-9066-ae286e03590d 00:20:33.250 Thin Provisioning: Not Supported 00:20:33.250 Per-NS Atomic Units: Yes 00:20:33.250 Atomic Boundary Size (Normal): 0 00:20:33.250 Atomic Boundary Size (PFail): 0 00:20:33.250 Atomic Boundary Offset: 0 00:20:33.250 Maximum Single Source Range Length: 65535 00:20:33.250 Maximum Copy Length: 65535 00:20:33.250 Maximum Source Range Count: 1 00:20:33.250 NGUID/EUI64 Never Reused: No 00:20:33.250 Namespace Write Protected: No 00:20:33.250 Number of LBA Formats: 1 00:20:33.250 Current LBA Format: LBA Format #00 00:20:33.250 LBA Format #00: 
Data Size: 512 Metadata Size: 0 00:20:33.250 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.250 rmmod nvme_tcp 00:20:33.250 rmmod nvme_fabrics 00:20:33.250 rmmod nvme_keyring 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3446012 ']' 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3446012 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3446012 ']' 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3446012 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3446012 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3446012' 00:20:33.250 killing process with pid 3446012 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3446012 00:20:33.250 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3446012 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:20:33.510 12:59:51 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.041 12:59:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:36.041 00:20:36.041 real 0m5.577s 00:20:36.041 user 0m4.679s 00:20:36.041 sys 0m1.939s 00:20:36.041 12:59:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:36.041 12:59:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:36.041 ************************************ 00:20:36.041 END TEST nvmf_identify 00:20:36.041 ************************************ 00:20:36.041 12:59:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:36.041 12:59:53 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:36.041 12:59:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:36.041 12:59:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:36.041 12:59:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:36.041 ************************************ 00:20:36.041 START TEST nvmf_perf 00:20:36.041 ************************************ 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:36.041 * Looking for test storage... 00:20:36.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.041 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.042 12:59:53 
nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.042 12:59:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.943 12:59:55 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:37.943 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:37.943 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.943 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:37.944 Found net devices under 0000:84:00.0: cvl_0_0 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 
-- # net_devs+=("${pci_net_devs[@]}") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:37.944 Found net devices under 0000:84:00.1: cvl_0_1 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:37.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:37.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:20:37.944 00:20:37.944 --- 10.0.0.2 ping statistics --- 00:20:37.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.944 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:20:37.944 00:20:37.944 --- 10.0.0.1 ping statistics --- 00:20:37.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.944 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3448113 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3448113 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3448113 ']' 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.944 12:59:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:37.944 [2024-07-15 12:59:56.025653] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
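Once the target is up, the configuration that host/perf.sh applies (visible in the xtrace lines that follow) reduces to a handful of rpc.py calls. A consolidated sketch, with the long workspace path shortened to scripts/rpc.py and assuming the default /var/tmp/spdk.sock RPC socket; the bdev, subsystem, and listener names are the ones recorded in this run:
  scripts/rpc.py bdev_malloc_create 64 512                                          # -> Malloc0 (64 MB, 512-byte blocks)
  scripts/rpc.py nvmf_create_transport -t tcp -o                                    # transport options copied verbatim from this run
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1           # local NVMe at 0000:82:00.0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
The spdk_nvme_perf runs further down then attach to nqn.2016-06.io.spdk:cnode1 over 10.0.0.2:4420 (trtype:tcp trsvcid:4420).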
00:20:37.944 [2024-07-15 12:59:56.025762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.944 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.944 [2024-07-15 12:59:56.090881] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.202 [2024-07-15 12:59:56.201693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.202 [2024-07-15 12:59:56.201769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.202 [2024-07-15 12:59:56.201797] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.202 [2024-07-15 12:59:56.201809] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.202 [2024-07-15 12:59:56.201819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.202 [2024-07-15 12:59:56.201868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.202 [2024-07-15 12:59:56.201924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.202 [2024-07-15 12:59:56.202339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.202 [2024-07-15 12:59:56.202351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.769 12:59:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.769 12:59:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:38.769 12:59:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.769 12:59:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.769 12:59:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:39.028 12:59:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.028 12:59:56 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:39.028 12:59:56 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:42.319 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:42.319 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:42.319 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:20:42.319 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:42.576 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:42.576 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:20:42.576 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:42.576 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:42.576 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:42.833 [2024-07-15 13:00:00.831899] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP 
Transport Init *** 00:20:42.833 13:00:00 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.090 13:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:43.090 13:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.347 13:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:43.347 13:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:43.603 13:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.859 [2024-07-15 13:00:01.871710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.859 13:00:01 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:44.115 13:00:02 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:20:44.115 13:00:02 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:44.115 13:00:02 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:44.115 13:00:02 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:20:45.492 Initializing NVMe Controllers 00:20:45.492 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:20:45.492 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:20:45.492 Initialization complete. Launching workers. 00:20:45.492 ======================================================== 00:20:45.492 Latency(us) 00:20:45.492 Device Information : IOPS MiB/s Average min max 00:20:45.492 PCIE (0000:82:00.0) NSID 1 from core 0: 83225.57 325.10 383.86 32.83 4990.59 00:20:45.492 ======================================================== 00:20:45.492 Total : 83225.57 325.10 383.86 32.83 4990.59 00:20:45.492 00:20:45.492 13:00:03 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.492 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.430 Initializing NVMe Controllers 00:20:46.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:46.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:46.430 Initialization complete. Launching workers. 
00:20:46.430 ======================================================== 00:20:46.430 Latency(us) 00:20:46.430 Device Information : IOPS MiB/s Average min max 00:20:46.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 8987.75 157.99 45592.63 00:20:46.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36.00 0.14 27902.23 7960.03 47896.88 00:20:46.430 ======================================================== 00:20:46.430 Total : 149.00 0.58 13557.69 157.99 47896.88 00:20:46.430 00:20:46.430 13:00:04 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:46.430 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.802 Initializing NVMe Controllers 00:20:47.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.802 Initialization complete. Launching workers. 00:20:47.802 ======================================================== 00:20:47.802 Latency(us) 00:20:47.802 Device Information : IOPS MiB/s Average min max 00:20:47.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8443.00 32.98 3792.02 441.11 8356.35 00:20:47.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3851.00 15.04 8354.14 6044.76 16052.95 00:20:47.802 ======================================================== 00:20:47.802 Total : 12294.00 48.02 5221.07 441.11 16052.95 00:20:47.802 00:20:48.062 13:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:48.062 13:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:48.062 13:00:06 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.062 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.590 Initializing NVMe Controllers 00:20:50.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.590 Controller IO queue size 128, less than required. 00:20:50.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.590 Controller IO queue size 128, less than required. 00:20:50.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:50.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:50.590 Initialization complete. Launching workers. 
00:20:50.590 ======================================================== 00:20:50.590 Latency(us) 00:20:50.590 Device Information : IOPS MiB/s Average min max 00:20:50.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1479.99 370.00 87941.43 61126.67 132710.56 00:20:50.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 617.00 154.25 221883.94 79230.23 352611.49 00:20:50.590 ======================================================== 00:20:50.590 Total : 2096.99 524.25 127351.32 61126.67 352611.49 00:20:50.590 00:20:50.590 13:00:08 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:50.590 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.847 No valid NVMe controllers or AIO or URING devices found 00:20:50.847 Initializing NVMe Controllers 00:20:50.847 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:50.847 Controller IO queue size 128, less than required. 00:20:50.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.847 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:50.847 Controller IO queue size 128, less than required. 00:20:50.847 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:50.847 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:20:50.847 WARNING: Some requested NVMe devices were skipped 00:20:50.847 13:00:08 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:50.847 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.448 Initializing NVMe Controllers 00:20:53.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.448 Controller IO queue size 128, less than required. 00:20:53.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.448 Controller IO queue size 128, less than required. 00:20:53.448 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:53.448 Initialization complete. Launching workers. 
00:20:53.448 00:20:53.448 ==================== 00:20:53.448 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:53.448 TCP transport: 00:20:53.448 polls: 8461 00:20:53.448 idle_polls: 5522 00:20:53.448 sock_completions: 2939 00:20:53.448 nvme_completions: 5221 00:20:53.448 submitted_requests: 7816 00:20:53.448 queued_requests: 1 00:20:53.448 00:20:53.448 ==================== 00:20:53.448 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:53.448 TCP transport: 00:20:53.448 polls: 11644 00:20:53.448 idle_polls: 8257 00:20:53.448 sock_completions: 3387 00:20:53.448 nvme_completions: 5323 00:20:53.448 submitted_requests: 7936 00:20:53.448 queued_requests: 1 00:20:53.448 ======================================================== 00:20:53.448 Latency(us) 00:20:53.448 Device Information : IOPS MiB/s Average min max 00:20:53.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1304.26 326.06 100446.78 64757.81 158012.77 00:20:53.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1329.74 332.44 97288.53 47912.12 128654.60 00:20:53.448 ======================================================== 00:20:53.448 Total : 2634.00 658.50 98852.37 47912.12 158012.77 00:20:53.448 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.448 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.449 rmmod nvme_tcp 00:20:53.449 rmmod nvme_fabrics 00:20:53.449 rmmod nvme_keyring 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3448113 ']' 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3448113 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3448113 ']' 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3448113 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3448113 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3448113' 00:20:53.707 killing process with pid 3448113 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3448113 00:20:53.707 13:00:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3448113 00:20:55.610 13:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:55.610 13:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.610 13:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.610 13:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.610 13:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.610 13:00:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.611 13:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.611 13:00:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.517 13:00:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.517 00:20:57.517 real 0m21.729s 00:20:57.517 user 1m7.484s 00:20:57.517 sys 0m5.674s 00:20:57.517 13:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.517 13:00:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:57.517 ************************************ 00:20:57.517 END TEST nvmf_perf 00:20:57.517 ************************************ 00:20:57.517 13:00:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:57.517 13:00:15 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:57.517 13:00:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:57.517 13:00:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.517 13:00:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:57.517 ************************************ 00:20:57.517 START TEST nvmf_fio_host 00:20:57.517 ************************************ 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:57.517 * Looking for test storage... 
00:20:57.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.517 13:00:15 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.518 13:00:15 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:59.424 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:59.424 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:59.424 Found net devices under 0000:84:00.0: cvl_0_0 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:59.424 Found net devices under 0000:84:00.1: cvl_0_1 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:59.424 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:59.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:59.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:20:59.683 00:20:59.683 --- 10.0.0.2 ping statistics --- 00:20:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.683 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:59.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:59.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:20:59.683 00:20:59.683 --- 10.0.0.1 ping statistics --- 00:20:59.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:59.683 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3452713 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3452713 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3452713 ']' 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:59.683 13:00:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.683 [2024-07-15 13:00:17.741111] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:20:59.683 [2024-07-15 13:00:17.741197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.683 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.683 [2024-07-15 13:00:17.801835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:59.941 [2024-07-15 13:00:17.905832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:59.941 [2024-07-15 13:00:17.905882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.941 [2024-07-15 13:00:17.905910] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.941 [2024-07-15 13:00:17.905921] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.941 [2024-07-15 13:00:17.905930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.941 [2024-07-15 13:00:17.906011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.941 [2024-07-15 13:00:17.906074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.941 [2024-07-15 13:00:17.906140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.941 [2024-07-15 13:00:17.906143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.941 13:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.941 13:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:59.941 13:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.199 [2024-07-15 13:00:18.260148] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.199 13:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:00.199 13:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.199 13:00:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.199 13:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:00.457 Malloc1 00:21:00.457 13:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.721 13:00:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:00.978 13:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.235 [2024-07-15 13:00:19.335646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.235 13:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:01.492 13:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:01.492 13:00:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.492 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:01.493 13:00:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.749 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:01.749 fio-3.35 00:21:01.749 Starting 1 thread 00:21:01.749 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.272 00:21:04.272 test: (groupid=0, jobs=1): err= 0: pid=3453069: Mon Jul 15 13:00:22 2024 00:21:04.272 read: IOPS=9180, BW=35.9MiB/s (37.6MB/s)(71.9MiB/2006msec) 00:21:04.272 slat (usec): min=2, max=151, avg= 3.15, stdev= 2.18 00:21:04.272 clat (usec): min=2511, max=12781, avg=7640.51, stdev=596.62 00:21:04.272 lat (usec): min=2538, max=12784, avg=7643.66, stdev=596.53 00:21:04.272 clat percentiles (usec): 00:21:04.272 | 1.00th=[ 6325], 5.00th=[ 6718], 10.00th=[ 6915], 20.00th=[ 7177], 00:21:04.272 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:21:04.272 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:21:04.272 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10421], 99.95th=[11338], 00:21:04.272 | 99.99th=[12780] 00:21:04.272 bw ( KiB/s): min=35912, 
max=37200, per=99.96%, avg=36706.00, stdev=560.09, samples=4 00:21:04.272 iops : min= 8978, max= 9300, avg=9176.50, stdev=140.02, samples=4 00:21:04.272 write: IOPS=9189, BW=35.9MiB/s (37.6MB/s)(72.0MiB/2006msec); 0 zone resets 00:21:04.272 slat (usec): min=2, max=133, avg= 3.28, stdev= 2.01 00:21:04.272 clat (usec): min=1355, max=12669, avg=6258.87, stdev=528.52 00:21:04.272 lat (usec): min=1365, max=12672, avg=6262.15, stdev=528.49 00:21:04.272 clat percentiles (usec): 00:21:04.272 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:21:04.272 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:21:04.272 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:21:04.272 | 99.00th=[ 7373], 99.50th=[ 7635], 99.90th=[10814], 99.95th=[11338], 00:21:04.272 | 99.99th=[12518] 00:21:04.272 bw ( KiB/s): min=36544, max=36864, per=99.95%, avg=36740.00, stdev=138.49, samples=4 00:21:04.272 iops : min= 9136, max= 9216, avg=9185.00, stdev=34.62, samples=4 00:21:04.272 lat (msec) : 2=0.03%, 4=0.11%, 10=99.75%, 20=0.12% 00:21:04.272 cpu : usr=69.98%, sys=27.93%, ctx=64, majf=0, minf=40 00:21:04.272 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:04.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.272 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.272 issued rwts: total=18416,18434,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.272 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.272 00:21:04.272 Run status group 0 (all jobs): 00:21:04.272 READ: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=71.9MiB (75.4MB), run=2006-2006msec 00:21:04.272 WRITE: bw=35.9MiB/s (37.6MB/s), 35.9MiB/s-35.9MiB/s (37.6MB/s-37.6MB/s), io=72.0MiB (75.5MB), run=2006-2006msec 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:04.272 13:00:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:04.272 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:04.272 fio-3.35 00:21:04.272 Starting 1 thread 00:21:04.272 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.798 00:21:06.798 test: (groupid=0, jobs=1): err= 0: pid=3453522: Mon Jul 15 13:00:24 2024 00:21:06.798 read: IOPS=8266, BW=129MiB/s (135MB/s)(259MiB/2007msec) 00:21:06.798 slat (usec): min=2, max=156, avg= 4.44, stdev= 2.95 00:21:06.798 clat (usec): min=2357, max=17181, avg=8997.79, stdev=2056.50 00:21:06.798 lat (usec): min=2361, max=17185, avg=9002.23, stdev=2056.57 00:21:06.798 clat percentiles (usec): 00:21:06.798 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:21:06.798 | 30.00th=[ 7832], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9503], 00:21:06.798 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[12387], 00:21:06.798 | 99.00th=[14353], 99.50th=[15664], 99.90th=[16450], 99.95th=[16712], 00:21:06.798 | 99.99th=[17171] 00:21:06.798 bw ( KiB/s): min=56000, max=78944, per=51.02%, avg=67488.00, stdev=10440.54, samples=4 00:21:06.798 iops : min= 3500, max= 4934, avg=4218.00, stdev=652.53, samples=4 00:21:06.798 write: IOPS=4885, BW=76.3MiB/s (80.0MB/s)(138MiB/1810msec); 0 zone resets 00:21:06.798 slat (usec): min=30, max=200, avg=38.41, stdev= 6.91 00:21:06.798 clat (usec): min=7187, max=18601, avg=11419.92, stdev=1763.97 00:21:06.798 lat (usec): min=7224, max=18635, avg=11458.32, stdev=1763.90 00:21:06.798 clat percentiles (usec): 00:21:06.798 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:21:06.798 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:21:06.798 | 70.00th=[12256], 80.00th=[12911], 90.00th=[13829], 95.00th=[14484], 00:21:06.798 | 99.00th=[16057], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957], 00:21:06.798 | 99.99th=[18482] 00:21:06.798 bw ( KiB/s): min=58720, max=81920, per=90.01%, avg=70360.00, stdev=10742.52, samples=4 00:21:06.798 iops : min= 3670, max= 5120, avg=4397.50, stdev=671.41, samples=4 00:21:06.798 lat (msec) : 4=0.18%, 10=51.22%, 20=48.60% 00:21:06.798 cpu : usr=80.86%, sys=17.45%, ctx=56, majf=0, minf=74 
00:21:06.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:06.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.798 issued rwts: total=16591,8843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.798 00:21:06.798 Run status group 0 (all jobs): 00:21:06.798 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (272MB), run=2007-2007msec 00:21:06.798 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=138MiB (145MB), run=1810-1810msec 00:21:06.798 13:00:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.058 rmmod nvme_tcp 00:21:07.058 rmmod nvme_fabrics 00:21:07.058 rmmod nvme_keyring 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3452713 ']' 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3452713 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3452713 ']' 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3452713 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3452713 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3452713' 00:21:07.058 killing process with pid 3452713 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3452713 00:21:07.058 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3452713 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp 
== \t\c\p ]] 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.316 13:00:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.850 13:00:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:09.850 00:21:09.850 real 0m12.000s 00:21:09.850 user 0m35.619s 00:21:09.850 sys 0m3.750s 00:21:09.850 13:00:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:09.850 13:00:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.850 ************************************ 00:21:09.850 END TEST nvmf_fio_host 00:21:09.850 ************************************ 00:21:09.850 13:00:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:09.850 13:00:27 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:09.850 13:00:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:09.850 13:00:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:09.850 13:00:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:09.850 ************************************ 00:21:09.850 START TEST nvmf_failover 00:21:09.850 ************************************ 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:09.850 * Looking for test storage... 
00:21:09.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:09.850 13:00:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:11.763 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:11.763 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:11.763 Found net devices under 0000:84:00.0: cvl_0_0 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:11.763 Found net devices under 0000:84:00.1: cvl_0_1 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:11.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:11.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:11.763 00:21:11.763 --- 10.0.0.2 ping statistics --- 00:21:11.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.763 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:11.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:21:11.763 00:21:11.763 --- 10.0.0.1 ping statistics --- 00:21:11.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.763 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3455733 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3455733 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3455733 ']' 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:11.763 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:11.763 [2024-07-15 13:00:29.721880] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:21:11.763 [2024-07-15 13:00:29.721967] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.763 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.763 [2024-07-15 13:00:29.783595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.763 [2024-07-15 13:00:29.887930] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.763 [2024-07-15 13:00:29.887981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.763 [2024-07-15 13:00:29.888011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.763 [2024-07-15 13:00:29.888022] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.763 [2024-07-15 13:00:29.888031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.763 [2024-07-15 13:00:29.888164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.763 [2024-07-15 13:00:29.888239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.763 [2024-07-15 13:00:29.888242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.022 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.022 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:12.022 13:00:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.022 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.022 13:00:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.022 13:00:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.022 13:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:12.279 [2024-07-15 13:00:30.236517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.279 13:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:12.536 Malloc0 00:21:12.536 13:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.794 13:00:30 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.051 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.308 [2024-07-15 13:00:31.264413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.308 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:13.308 [2024-07-15 
13:00:31.513277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:13.565 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:13.565 [2024-07-15 13:00:31.754048] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3456017 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3456017 /var/tmp/bdevperf.sock 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3456017 ']' 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.826 13:00:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:14.086 13:00:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.086 13:00:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:14.086 13:00:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.343 NVMe0n1 00:21:14.343 13:00:32 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.601 00:21:14.601 13:00:32 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3456147 00:21:14.601 13:00:32 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.601 13:00:32 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:15.980 13:00:33 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.980 [2024-07-15 13:00:34.063210] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f67550 is same with the state(5) to be set 00:21:15.980 13:00:34 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:19.266 13:00:37 
nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.266 00:21:19.266 13:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.523 13:00:37 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:22.840 13:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.840 [2024-07-15 13:00:40.971402] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.840 13:00:40 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:23.800 13:00:41 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:24.058 [2024-07-15 13:00:42.254134] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254215] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 
13:00:42.254414] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254438] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254449] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254461] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254508] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254555] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.058 [2024-07-15 13:00:42.254566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2121ef0 is same with the state(5) to be set 00:21:24.315 13:00:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 3456147 00:21:30.914 0 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 3456017 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3456017 ']' 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3456017 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3456017 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3456017' 00:21:30.914 killing process with pid 3456017 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3456017 00:21:30.914 13:00:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3456017 
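For readability, here is a condensed sketch of the sequence host/failover.sh drives in the trace above; it is a reconstruction for illustration, not part of the captured console output. rpc.py abbreviates /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, bdevperf abbreviates the build/examples/bdevperf binary, and the ports, subsystem NQN and bdev names are the ones logged by the test.

  # target side: TCP transport, one malloc-backed namespace, three listeners
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done

  # initiator side: bdevperf attaches over the first two paths, then listeners are
  # removed/re-added while verify I/O runs for 15 seconds to exercise failover
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # drop the primary path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # drop the second path
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # restore the first path
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # drop the third path
  wait "$run_test_pid"                                                                          # bdevperf must keep running across every transition

The exact sleeps between transitions and the bdevperf_pid/run_test_pid bookkeeping live in host/failover.sh; the sketch only mirrors the order of operations visible in the trace, and the tcp.c:1607 qpair-state messages earlier appear to coincide with these listener removals, when the active connection is torn down and re-established on another port.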
00:21:30.914 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:30.914 [2024-07-15 13:00:31.815077] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:21:30.914 [2024-07-15 13:00:31.815192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456017 ] 00:21:30.914 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.914 [2024-07-15 13:00:31.878809] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.914 [2024-07-15 13:00:31.992650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.914 Running I/O for 15 seconds... 00:21:30.914 [2024-07-15 13:00:34.063615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 
13:00:34.063918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.063979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.063995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.914 [2024-07-15 13:00:34.064386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.914 [2024-07-15 13:00:34.064695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.914 [2024-07-15 13:00:34.064708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.064763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.064793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.064827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.064856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.064886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.064915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.064944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.064973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.064988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 
[2024-07-15 13:00:34.065173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.915 [2024-07-15 13:00:34.065617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.065975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.065989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.915 [2024-07-15 13:00:34.066005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.915 [2024-07-15 13:00:34.066018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:30.916 [2024-07-15 13:00:34.066405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.066978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.066993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.916 [2024-07-15 13:00:34.067263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.916 [2024-07-15 13:00:34.067276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:34.067518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.917 [2024-07-15 13:00:34.067547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfefc40 is same with the state(5) to be set 00:21:30.917 [2024-07-15 13:00:34.067579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.917 [2024-07-15 13:00:34.067592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.917 [2024-07-15 13:00:34.067604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85640 len:8 PRP1 0x0 PRP2 0x0 00:21:30.917 [2024-07-15 13:00:34.067616] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067679] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfefc40 was disconnected and freed. reset controller. 00:21:30.917 [2024-07-15 13:00:34.067698] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:30.917 [2024-07-15 13:00:34.067734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.917 [2024-07-15 13:00:34.067759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.917 [2024-07-15 13:00:34.067789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.917 [2024-07-15 13:00:34.067821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.917 [2024-07-15 13:00:34.067853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:34.067867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.917 [2024-07-15 13:00:34.071114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.917 [2024-07-15 13:00:34.071152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9790 (9): Bad file descriptor 00:21:30.917 [2024-07-15 13:00:34.221302] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:30.917 [2024-07-15 13:00:37.693285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 
13:00:37.693694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:119128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:119136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:119144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.693973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.693988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.694002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.694017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.694046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.694061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.694075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.694090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.694103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.694118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:119216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.917 [2024-07-15 13:00:37.694131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.917 [2024-07-15 13:00:37.694146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:119240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:119248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694319] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:119288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:119304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:119368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:119384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:119408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119432 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:119440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.694976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.694989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:119480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.918 [2024-07-15 13:00:37.695185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:119504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.918 [2024-07-15 13:00:37.695198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:119512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:30.919 [2024-07-15 13:00:37.695226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:119520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.919 [2024-07-15 13:00:37.695253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.919 [2024-07-15 13:00:37.695280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:119536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.919 [2024-07-15 13:00:37.695308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.919 [2024-07-15 13:00:37.695340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.919 [2024-07-15 13:00:37.695369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 
13:00:37.695507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.695985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:119760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:119824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:119840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:119856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.919 [2024-07-15 13:00:37.696417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.919 [2024-07-15 13:00:37.696436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:119864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:119880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:119928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.920 [2024-07-15 13:00:37.696693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:119944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.920 [2024-07-15 13:00:37.696773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.696842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119960 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.696857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.696887] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.696898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119968 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.696911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.696937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.696948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119976 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.696960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.696973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.696984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.696995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119984 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 
13:00:37.697043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119992 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120000 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120008 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120016 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120024 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120032 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120040 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120048 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119560 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.920 [2024-07-15 13:00:37.697466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.920 [2024-07-15 13:00:37.697477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119568 len:8 PRP1 0x0 PRP2 0x0 00:21:30.920 [2024-07-15 13:00:37.697490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697556] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1194500 was disconnected and freed. reset controller. 
00:21:30.920 [2024-07-15 13:00:37.697576] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:30.920 [2024-07-15 13:00:37.697613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.920 [2024-07-15 13:00:37.697631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.920 [2024-07-15 13:00:37.697662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.920 [2024-07-15 13:00:37.697705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.920 [2024-07-15 13:00:37.697733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.920 [2024-07-15 13:00:37.697757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.920 [2024-07-15 13:00:37.697801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9790 (9): Bad file descriptor 00:21:30.920 [2024-07-15 13:00:37.701037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.920 [2024-07-15 13:00:37.778570] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
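(Aside, not part of the captured log.) The burst of "ABORTED - SQ DELETION (00/08)" completions above is the expected signature of a failover: when the active path (10.0.0.2:4421) goes away, bdev_nvme drains the old I/O queue pair, aborts whatever is still queued, fails the controller over to the next registered listener (10.0.0.2:4422) and resets it, which is exactly what the disconnect/reset notices at the end of the burst report. A quick way to tally these cycles in a saved copy of this output is a pair of greps; the file name nvmf_failover.log is only an assumption for the sketch:

  # count failover transitions and successful controller resets in a saved log
  grep -c 'bdev_nvme_failover_trid.*Start failover' nvmf_failover.log
  grep -c 'Resetting controller successful' nvmf_failover.log

The test script performs the second count itself further down and asserts that exactly three successful resets occurred.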
00:21:30.920 [2024-07-15 13:00:42.255033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.920 [2024-07-15 13:00:42.255103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255406] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:71016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255686] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.255978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.255991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:71176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.921 [2024-07-15 13:00:42.256272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.921 [2024-07-15 13:00:42.256301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:30.921 [2024-07-15 13:00:42.256328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.921 [2024-07-15 13:00:42.256356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.921 [2024-07-15 13:00:42.256383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.921 [2024-07-15 13:00:42.256397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.921 [2024-07-15 13:00:42.256410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256612] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256921] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.256978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.256993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 [2024-07-15 13:00:42.257481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.922 [2024-07-15 13:00:42.257495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.922 
[2024-07-15 13:00:42.257510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.257974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.257990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71776 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.923 [2024-07-15 13:00:42.258626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.923 [2024-07-15 13:00:42.258689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71840 len:8 PRP1 0x0 PRP2 0x0 00:21:30.923 [2024-07-15 13:00:42.258702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.923 [2024-07-15 13:00:42.258753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.923 [2024-07-15 13:00:42.258766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71848 len:8 PRP1 0x0 PRP2 0x0 00:21:30.923 [2024-07-15 
13:00:42.258779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.923 [2024-07-15 13:00:42.258803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.923 [2024-07-15 13:00:42.258814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71856 len:8 PRP1 0x0 PRP2 0x0 00:21:30.923 [2024-07-15 13:00:42.258831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.923 [2024-07-15 13:00:42.258850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.924 [2024-07-15 13:00:42.258862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.924 [2024-07-15 13:00:42.258873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71864 len:8 PRP1 0x0 PRP2 0x0 00:21:30.924 [2024-07-15 13:00:42.258886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.258898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.924 [2024-07-15 13:00:42.258909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.924 [2024-07-15 13:00:42.258920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71872 len:8 PRP1 0x0 PRP2 0x0 00:21:30.924 [2024-07-15 13:00:42.258933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.258946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.924 [2024-07-15 13:00:42.258957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.924 [2024-07-15 13:00:42.258968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71880 len:8 PRP1 0x0 PRP2 0x0 00:21:30.924 [2024-07-15 13:00:42.258981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.258993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.924 [2024-07-15 13:00:42.259004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.924 [2024-07-15 13:00:42.259015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71888 len:8 PRP1 0x0 PRP2 0x0 00:21:30.924 [2024-07-15 13:00:42.259037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.924 [2024-07-15 13:00:42.259060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.924 [2024-07-15 13:00:42.259071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71184 len:8 PRP1 0x0 PRP2 0x0 00:21:30.924 [2024-07-15 13:00:42.259084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259097] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.924 [2024-07-15 13:00:42.259107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.924 [2024-07-15 13:00:42.259118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71192 len:8 PRP1 0x0 PRP2 0x0 00:21:30.924 [2024-07-15 13:00:42.259130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259190] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11942f0 was disconnected and freed. reset controller. 00:21:30.924 [2024-07-15 13:00:42.259209] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:30.924 [2024-07-15 13:00:42.259255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.924 [2024-07-15 13:00:42.259273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.924 [2024-07-15 13:00:42.259305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.924 [2024-07-15 13:00:42.259332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.924 [2024-07-15 13:00:42.259360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.924 [2024-07-15 13:00:42.259373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.924 [2024-07-15 13:00:42.259411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc9790 (9): Bad file descriptor 00:21:30.924 [2024-07-15 13:00:42.262677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.924 [2024-07-15 13:00:42.346210] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:30.924 00:21:30.924 Latency(us) 00:21:30.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.924 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.924 Verification LBA range: start 0x0 length 0x4000 00:21:30.924 NVMe0n1 : 15.01 8751.94 34.19 824.26 0.00 13341.00 509.72 18544.26 00:21:30.924 =================================================================================================================== 00:21:30.924 Total : 8751.94 34.19 824.26 0.00 13341.00 509.72 18544.26 00:21:30.924 Received shutdown signal, test time was about 15.000000 seconds 00:21:30.924 00:21:30.924 Latency(us) 00:21:30.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.924 =================================================================================================================== 00:21:30.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3457877 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3457877 /var/tmp/bdevperf.sock 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3457877 ']' 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:30.924 [2024-07-15 13:00:48.777723] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:30.924 13:00:48 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:30.924 [2024-07-15 13:00:49.058555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:30.924 13:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.489 NVMe0n1 00:21:31.489 13:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.747 00:21:31.747 13:00:49 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.312 00:21:32.312 13:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.312 13:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:32.312 13:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.570 13:00:50 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:35.856 13:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:35.856 13:00:53 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:35.856 13:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3458550 00:21:35.856 13:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.856 13:00:54 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 3458550 00:21:37.234 0 00:21:37.234 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.234 [2024-07-15 13:00:48.275588] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:21:37.234 [2024-07-15 13:00:48.275672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457877 ] 00:21:37.234 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.234 [2024-07-15 13:00:48.334081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.234 [2024-07-15 13:00:48.440916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.234 [2024-07-15 13:00:50.735710] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:37.234 [2024-07-15 13:00:50.735836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.234 [2024-07-15 13:00:50.735860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.234 [2024-07-15 13:00:50.735877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.234 [2024-07-15 13:00:50.735892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.234 [2024-07-15 13:00:50.735906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.234 [2024-07-15 13:00:50.735920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.234 [2024-07-15 13:00:50.735934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.234 [2024-07-15 13:00:50.735948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.234 [2024-07-15 13:00:50.735962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.234 [2024-07-15 13:00:50.736010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.234 [2024-07-15 13:00:50.736042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c5790 (9): Bad file descriptor 00:21:37.234 [2024-07-15 13:00:50.868902] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.234 Running I/O for 1 seconds... 
00:21:37.234 00:21:37.234 Latency(us) 00:21:37.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.234 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:37.234 Verification LBA range: start 0x0 length 0x4000 00:21:37.234 NVMe0n1 : 1.01 8965.20 35.02 0.00 0.00 14220.89 2706.39 11747.93 00:21:37.234 =================================================================================================================== 00:21:37.234 Total : 8965.20 35.02 0.00 0.00 14220.89 2706.39 11747.93 00:21:37.234 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.234 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:37.492 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.492 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.492 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:37.751 13:00:55 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.010 13:00:56 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 3457877 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3457877 ']' 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3457877 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3457877 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3457877' 00:21:41.295 killing process with pid 3457877 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3457877 00:21:41.295 13:00:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3457877 00:21:41.553 13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:41.553 13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:41.811 13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:41.811 
13:00:59 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:41.811 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:41.811 rmmod nvme_tcp 00:21:42.069 rmmod nvme_fabrics 00:21:42.069 rmmod nvme_keyring 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3455733 ']' 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3455733 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3455733 ']' 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3455733 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3455733 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3455733' 00:21:42.069 killing process with pid 3455733 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3455733 00:21:42.069 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3455733 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:42.328 13:01:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.235 13:01:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.235 00:21:44.235 real 0m34.881s 00:21:44.235 user 2m2.807s 00:21:44.235 sys 0m6.212s 00:21:44.235 13:01:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.235 13:01:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
00:21:44.235 ************************************ 00:21:44.235 END TEST nvmf_failover 00:21:44.235 ************************************ 00:21:44.235 13:01:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:44.235 13:01:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:44.235 13:01:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:44.235 13:01:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.235 13:01:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:44.493 ************************************ 00:21:44.493 START TEST nvmf_host_discovery 00:21:44.493 ************************************ 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:44.493 * Looking for test storage... 00:21:44.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.493 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:44.494 13:01:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:44.494 13:01:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.394 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.394 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.394 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.395 13:01:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:46.395 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:46.395 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.395 13:01:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:46.395 Found net devices under 0000:84:00.0: cvl_0_0 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:46.395 Found net devices under 0000:84:00.1: cvl_0_1 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.395 13:01:04 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.395 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:46.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:21:46.653 00:21:46.653 --- 10.0.0.2 ping statistics --- 00:21:46.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.653 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:21:46.653 00:21:46.653 --- 10.0.0.1 ping statistics --- 00:21:46.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.653 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:46.653 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3461280 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3461280 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3461280 ']' 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.654 13:01:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.654 [2024-07-15 13:01:04.758033] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:21:46.654 [2024-07-15 13:01:04.758117] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.654 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.654 [2024-07-15 13:01:04.820365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.912 [2024-07-15 13:01:04.923003] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.912 [2024-07-15 13:01:04.923075] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.912 [2024-07-15 13:01:04.923103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.912 [2024-07-15 13:01:04.923114] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.912 [2024-07-15 13:01:04.923124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:46.912 [2024-07-15 13:01:04.923157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.912 [2024-07-15 13:01:05.058513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.912 [2024-07-15 13:01:05.066648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.912 null0 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.912 null1 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3461306 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 3461306 /tmp/host.sock 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3461306 ']' 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:46.912 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.912 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.170 [2024-07-15 13:01:05.139858] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:21:47.170 [2024-07-15 13:01:05.139950] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461306 ] 00:21:47.170 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.170 [2024-07-15 13:01:05.196707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.170 [2024-07-15 13:01:05.302002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:47.429 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.430 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.690 [2024-07-15 13:01:05.668284] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.690 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:47.691 13:01:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:48.258 [2024-07-15 13:01:06.425774] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.258 [2024-07-15 13:01:06.425801] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.258 [2024-07-15 13:01:06.425825] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.518 [2024-07-15 13:01:06.512149] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:48.518 [2024-07-15 13:01:06.618599] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.518 [2024-07-15 13:01:06.618622] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:48.776 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.777 13:01:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.777 13:01:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.036 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.297 [2024-07-15 13:01:07.297195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.297 [2024-07-15 13:01:07.297671] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:49.297 [2024-07-15 13:01:07.297707] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.297 [2024-07-15 13:01:07.425583] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:49.297 13:01:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:49.555 [2024-07-15 13:01:07.694883] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:49.555 [2024-07-15 13:01:07.694915] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:49.556 [2024-07-15 13:01:07.694926] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.518 [2024-07-15 13:01:08.517402] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:50.518 [2024-07-15 13:01:08.517435] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:50.518 [2024-07-15 13:01:08.520755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.518 [2024-07-15 13:01:08.520787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.518 [2024-07-15 13:01:08.520820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.518 [2024-07-15 13:01:08.520834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.518 [2024-07-15 13:01:08.520848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.518 [2024-07-15 13:01:08.520862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.518 [2024-07-15 13:01:08.520875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.518 [2024-07-15 13:01:08.520889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.518 [2024-07-15 13:01:08.520902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.518 13:01:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.518 [2024-07-15 13:01:08.530749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.518 [2024-07-15 13:01:08.540799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.518 [2024-07-15 13:01:08.540993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.518 [2024-07-15 13:01:08.541046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.518 [2024-07-15 13:01:08.541063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.518 [2024-07-15 13:01:08.541084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.518 [2024-07-15 13:01:08.541106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.518 [2024-07-15 13:01:08.541119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.518 [2024-07-15 13:01:08.541133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.518 [2024-07-15 13:01:08.541152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
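The retry loop that dominates the xtrace in this test is waitforcondition (common/autotest_common.sh, traced as @912-@918 above). A minimal sketch of what it evidently does, reconstructed from the traced statements alone; the while-loop syntax and the failure path are assumptions, not verbatim SPDK source:

    waitforcondition() {
        local cond=$1                 # @912: condition string, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10                  # @913: retry budget
        while ((max--)); do           # @914
            if eval "$cond"; then     # @915: evaluate the condition string
                return 0              # @916: condition satisfied
            fi
            sleep 1                   # @918: back off before the next attempt
        done
        return 1                      # assumed failure path; never reached in this passing run
    }

Every '[[ "$(get_...)" == ... ]]' comparison in this log is one iteration of that loop.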
00:21:50.518 [2024-07-15 13:01:08.550890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.518 [2024-07-15 13:01:08.551125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.518 [2024-07-15 13:01:08.551151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.518 [2024-07-15 13:01:08.551166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.518 [2024-07-15 13:01:08.551188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.518 [2024-07-15 13:01:08.551221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.518 [2024-07-15 13:01:08.551238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.518 [2024-07-15 13:01:08.551250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.518 [2024-07-15 13:01:08.551269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.518 [2024-07-15 13:01:08.560988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.518 [2024-07-15 13:01:08.561224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:50.518 [2024-07-15 13:01:08.561250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.518 [2024-07-15 13:01:08.561266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.518 [2024-07-15 13:01:08.561286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.518 [2024-07-15 13:01:08.561312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.518 [2024-07-15 13:01:08.561326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.518 [2024-07-15 13:01:08.561339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.518 [2024-07-15 13:01:08.561358] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
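The get_* helpers those conditions call (host/discovery.sh @55, @59 and @63 in the trace) are thin rpc_cmd-plus-jq pipelines against the host application's RPC socket at /tmp/host.sock. A reconstruction from the traced pipelines; the RPC names, jq filters and sort/xargs normalization are exactly as logged, only the surrounding function syntax is assumed:

    get_bdev_list() {               # @55: namespaces the host currently exposes as bdevs
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_names() {         # @59: NVMe controllers attached on the host side
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {         # @63: listener ports (trsvcid) known for one controller
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

xargs flattens each result onto one line, which is why the conditions compare against strings like "nvme0n1 nvme0n2" or "4420 4421".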
00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.518 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.519 [2024-07-15 13:01:08.571078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.519 [2024-07-15 13:01:08.571278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.519 [2024-07-15 13:01:08.571304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.519 [2024-07-15 13:01:08.571319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.519 [2024-07-15 13:01:08.571339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.519 [2024-07-15 13:01:08.571397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.519 [2024-07-15 13:01:08.571415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.519 [2024-07-15 13:01:08.571428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.519 [2024-07-15 13:01:08.571446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
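The notification checks (host/discovery.sh @74-@80) count the notify events that arrived since the last consumed notify_id and wait until the delta matches the expected value. A sketch inferred from the traced lines and from the logged notify_id progression (0 -> 1 -> 2 -> 2 -> 4); the cursor arithmetic is an inference, not verbatim source:

    get_notification_count() {
        # @74: count events newer than the last consumed id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        # @75: advance the cursor by what was just counted (inferred from the logged values)
        notify_id=$((notify_id + notification_count))
    }
    is_notification_count_eq() {
        expected_count=$1              # @79
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'   # @80
    }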
00:21:50.519 [2024-07-15 13:01:08.581163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.519 [2024-07-15 13:01:08.581354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.519 [2024-07-15 13:01:08.581379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.519 [2024-07-15 13:01:08.581395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.519 [2024-07-15 13:01:08.581415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.519 [2024-07-15 13:01:08.581435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.519 [2024-07-15 13:01:08.581448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.519 [2024-07-15 13:01:08.581461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.519 [2024-07-15 13:01:08.581478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.519 [2024-07-15 13:01:08.591245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.519 [2024-07-15 13:01:08.591408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.519 [2024-07-15 13:01:08.591433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.519 [2024-07-15 13:01:08.591447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.519 [2024-07-15 13:01:08.591467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.519 [2024-07-15 13:01:08.591498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.519 [2024-07-15 13:01:08.591514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.519 [2024-07-15 13:01:08.591525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.519 [2024-07-15 13:01:08.591542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
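The burst of connect()/reset errors above is the expected fallout of the path-failover step this test case exercises: a second listener was added, then the original one removed, so the host's reconnect attempts to the old port are refused (errno 111, ECONNREFUSED) while discovery keeps the 4421 path alive. The target-side RPCs, exactly as invoked earlier in this trace (host/discovery.sh @118 and @127):

    # Add a second listener and wait for the host to report paths "4420 4421" ...
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    # ... then drop the original one; the host should end up with "4421" only, and its
    # reconnect attempts to port 4420 fail with connect() errno 111 as logged above.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The discovery poller then reports the 4420 path "not found" and the 4421 path "found again", which is what the @129-@132 conditions wait for.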
00:21:50.519 [2024-07-15 13:01:08.601324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.519 [2024-07-15 13:01:08.601512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.519 [2024-07-15 13:01:08.601537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168210 with addr=10.0.0.2, port=4420 00:21:50.519 [2024-07-15 13:01:08.601551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2168210 is same with the state(5) to be set 00:21:50.519 [2024-07-15 13:01:08.601571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2168210 (9): Bad file descriptor 00:21:50.519 [2024-07-15 13:01:08.601590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:50.519 [2024-07-15 13:01:08.601602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:50.519 [2024-07-15 13:01:08.601614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:50.519 [2024-07-15 13:01:08.601631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.519 [2024-07-15 13:01:08.603104] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:50.519 [2024-07-15 13:01:08.603131] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# [[ 4421 == \4\4\2\1 ]] 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:50.519 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- 
# jq '. | length' 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:50.785 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:50.786 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.786 13:01:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.786 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.786 13:01:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.725 [2024-07-15 13:01:09.854969] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:51.725 [2024-07-15 13:01:09.854998] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:51.725 [2024-07-15 13:01:09.855023] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.986 [2024-07-15 13:01:09.941290] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:52.246 [2024-07-15 13:01:10.213249] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:52.246 [2024-07-15 13:01:10.213324] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.246 request: 00:21:52.246 { 00:21:52.246 "name": "nvme", 00:21:52.246 
"trtype": "tcp", 00:21:52.246 "traddr": "10.0.0.2", 00:21:52.246 "adrfam": "ipv4", 00:21:52.246 "trsvcid": "8009", 00:21:52.246 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.246 "wait_for_attach": true, 00:21:52.246 "method": "bdev_nvme_start_discovery", 00:21:52.246 "req_id": 1 00:21:52.246 } 00:21:52.246 Got JSON-RPC error response 00:21:52.246 response: 00:21:52.246 { 00:21:52.246 "code": -17, 00:21:52.246 "message": "File exists" 00:21:52.246 } 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.246 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.247 request: 00:21:52.247 { 00:21:52.247 "name": "nvme_second", 00:21:52.247 "trtype": "tcp", 00:21:52.247 "traddr": "10.0.0.2", 00:21:52.247 "adrfam": "ipv4", 00:21:52.247 "trsvcid": "8009", 00:21:52.247 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.247 "wait_for_attach": true, 00:21:52.247 "method": "bdev_nvme_start_discovery", 00:21:52.247 "req_id": 1 00:21:52.247 } 00:21:52.247 Got JSON-RPC error response 00:21:52.247 response: 00:21:52.247 { 00:21:52.247 "code": -17, 00:21:52.247 "message": "File exists" 00:21:52.247 } 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.247 13:01:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.629 [2024-07-15 13:01:11.424877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:53.629 [2024-07-15 13:01:11.424966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b1070 with addr=10.0.0.2, port=8010 00:21:53.629 [2024-07-15 13:01:11.425001] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:53.629 [2024-07-15 13:01:11.425018] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:53.629 [2024-07-15 13:01:11.425046] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:54.565 [2024-07-15 13:01:12.427135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:54.565 [2024-07-15 13:01:12.427181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b1070 with addr=10.0.0.2, port=8010 00:21:54.565 [2024-07-15 13:01:12.427201] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:54.565 [2024-07-15 13:01:12.427213] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:54.565 [2024-07-15 13:01:12.427225] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:55.504 [2024-07-15 13:01:13.429370] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:55.504 request: 00:21:55.504 { 00:21:55.504 "name": "nvme_second", 00:21:55.504 "trtype": "tcp", 00:21:55.504 "traddr": "10.0.0.2", 00:21:55.504 "adrfam": "ipv4", 00:21:55.504 "trsvcid": "8010", 00:21:55.504 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:55.504 "wait_for_attach": false, 00:21:55.504 "attach_timeout_ms": 3000, 00:21:55.504 "method": "bdev_nvme_start_discovery", 00:21:55.504 "req_id": 1 00:21:55.504 } 00:21:55.504 Got JSON-RPC error response 00:21:55.504 response: 00:21:55.504 { 00:21:55.504 "code": -110, 00:21:55.504 "message": "Connection timed out" 00:21:55.504 } 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3461306 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.504 rmmod nvme_tcp 00:21:55.504 rmmod nvme_fabrics 00:21:55.504 rmmod nvme_keyring 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3461280 ']' 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3461280 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3461280 ']' 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3461280 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3461280 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.504 
13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3461280' 00:21:55.504 killing process with pid 3461280 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3461280 00:21:55.504 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3461280 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.762 13:01:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.296 13:01:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:58.296 00:21:58.296 real 0m13.441s 00:21:58.296 user 0m19.444s 00:21:58.296 sys 0m2.830s 00:21:58.296 13:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:58.296 13:01:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:58.296 ************************************ 00:21:58.296 END TEST nvmf_host_discovery 00:21:58.296 ************************************ 00:21:58.296 13:01:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:58.296 13:01:15 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:58.296 13:01:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:58.296 13:01:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:58.296 13:01:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.296 ************************************ 00:21:58.296 START TEST nvmf_host_multipath_status 00:21:58.296 ************************************ 00:21:58.296 13:01:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:58.296 * Looking for test storage... 
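run_test above hands multipath_status.sh its --transport=tcp argument and brackets the run with the START TEST / END TEST banners seen in this log. A minimal wrapper in that spirit, for orientation only (the real run_test in autotest_common.sh also checks its argument count, records timing, and manages xtrace state):
run_test_sketch() {
    local test_name=$1; shift
    printf '************************************\n'
    printf 'START TEST %s\n' "$test_name"
    printf '************************************\n'
    "$@"                 # e.g. test/nvmf/host/multipath_status.sh --transport=tcp
    local rc=$?
    printf '************************************\n'
    printf 'END TEST %s\n' "$test_name"
    printf '************************************\n'
    return "$rc"
}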
00:21:58.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:58.296 13:01:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:58.296 13:01:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.296 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:58.297 13:01:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.297 13:01:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:00.203 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:00.203 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
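The "Found 0000:84:00.0 (0x8086 - 0x159b)" lines above come from matching each candidate function's vendor/device pair against the Intel E810 IDs (0x1592, 0x159b) that the e810 array was seeded with. A hedged sketch of that check done directly against sysfs (the is_e810 helper name is illustrative only):
is_e810() {
    local pci=$1 vendor device
    vendor=$(cat "/sys/bus/pci/devices/$pci/vendor")   # e.g. 0x8086
    device=$(cat "/sys/bus/pci/devices/$pci/device")   # e.g. 0x159b
    [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]
}
is_e810 0000:84:00.0 && echo "0000:84:00.0 is an E810 port"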
00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:00.203 Found net devices under 0000:84:00.0: cvl_0_0 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:00.203 Found net devices under 0000:84:00.1: cvl_0_1 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:00.203 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:00.204 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:00.204 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:00.204 13:01:17 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:00.204 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:00.204 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:00.204 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:00.204 13:01:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:22:00.204 00:22:00.204 --- 10.0.0.2 ping statistics --- 00:22:00.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.204 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:22:00.204 00:22:00.204 --- 10.0.0.1 ping statistics --- 00:22:00.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.204 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3464477 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3464477 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3464477 ']' 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.204 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.204 [2024-07-15 13:01:18.190389] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
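waitforlisten above blocks until the nvmf_tgt launched inside the cvl_0_0_ns_spdk namespace (pid 3464477) is ready to serve RPCs on /var/tmp/spdk.sock. A hedged sketch of that launch-and-poll pattern, reusing the flags from the trace; the loop is illustrative, not the actual waitforlisten implementation:
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    # Bail out if the target exits before its RPC socket ever comes up.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"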
00:22:00.204 [2024-07-15 13:01:18.190480] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.204 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.204 [2024-07-15 13:01:18.253361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:00.204 [2024-07-15 13:01:18.359144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.204 [2024-07-15 13:01:18.359199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.204 [2024-07-15 13:01:18.359222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.204 [2024-07-15 13:01:18.359233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.204 [2024-07-15 13:01:18.359243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.204 [2024-07-15 13:01:18.359336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.204 [2024-07-15 13:01:18.359341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.462 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3464477 00:22:00.463 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:00.720 [2024-07-15 13:01:18.706050] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.720 13:01:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:00.978 Malloc0 00:22:00.978 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:01.235 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:01.493 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:01.751 [2024-07-15 13:01:19.729706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.751 13:01:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:02.009 [2024-07-15 13:01:19.970398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3464644 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3464644 /var/tmp/bdevperf.sock 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3464644 ']' 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.009 13:01:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:02.267 13:01:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.267 13:01:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:02.268 13:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:02.526 13:01:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:03.094 Nvme0n1 00:22:03.094 13:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:03.662 Nvme0n1 00:22:03.662 13:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:03.662 13:01:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:05.567 13:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:05.567 13:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:05.824 13:01:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.082 13:01:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:07.015 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:07.015 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:07.015 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.016 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.273 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.273 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:07.273 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.273 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.531 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.531 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.531 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.531 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.095 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.095 13:01:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.095 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.095 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.095 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.095 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.095 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.095 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:08.661 13:01:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.227 13:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:09.227 13:01:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.600 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.858 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.858 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.858 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.858 13:01:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.116 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.116 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.116 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.116 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.373 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.373 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.373 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.373 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.631 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.631 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.631 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.631 13:01:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.198 13:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.199 13:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:12.199 13:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:12.199 13:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:12.768 13:01:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:13.704 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:13.704 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:13.704 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.704 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.962 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.962 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:13.962 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.962 13:01:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.233 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.233 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.233 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.233 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.550 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.550 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:14.550 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.550 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:14.835 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.835 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:14.835 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.835 13:01:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.093 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.093 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.093 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.093 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.351 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.351 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:15.351 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:15.609 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:15.869 13:01:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:16.801 13:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:16.801 13:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:16.801 13:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.801 13:01:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.058 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.058 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.058 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.058 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.316 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.316 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.316 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.316 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:17.884 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.884 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:17.884 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.884 13:01:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:17.884 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.884 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:17.884 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.884 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.144 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:22:18.144 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.403 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.403 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.659 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.660 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:18.660 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:18.919 13:01:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:19.179 13:01:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:20.116 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:20.116 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.116 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.116 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.372 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.372 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:20.372 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.372 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:20.630 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.630 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:20.630 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.630 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.886 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.886 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:22:20.886 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.886 13:01:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.144 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.144 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.144 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.144 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.402 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.402 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:21.402 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.402 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.659 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.659 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:21.659 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:21.917 13:01:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:22.177 13:01:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:23.111 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:23.111 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:23.111 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.111 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.368 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.368 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:23.368 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.368 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.626 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.626 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.626 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.626 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.884 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.884 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.884 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.884 13:01:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.142 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.142 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:24.142 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.142 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.400 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.400 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.400 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.400 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.658 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.658 13:01:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:24.916 13:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:24.916 13:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:22:25.489 13:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:25.489 13:01:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.867 13:01:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.125 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.125 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.125 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.125 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.383 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.383 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.383 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.383 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.641 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.641 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:27.641 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.641 13:01:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.205 13:01:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.205 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.205 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.205 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.462 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.462 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:28.462 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:28.719 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:28.976 13:01:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:29.914 13:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:29.914 13:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:29.914 13:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.914 13:01:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.170 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.170 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:30.170 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.170 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.428 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.428 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.428 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:30.428 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.684 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.684 13:01:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:30.685 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.685 13:01:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:30.942 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.942 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:30.942 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.942 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.199 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.199 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.199 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.199 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.457 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.457 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:31.457 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.023 13:01:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:32.023 13:01:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:33.397 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:33.397 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:33.398 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.398 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.398 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.398 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.398 13:01:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.398 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.670 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.671 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.671 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.671 13:01:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:33.929 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.929 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:33.929 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.929 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.187 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.187 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.187 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.187 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.445 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.445 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.445 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.445 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:35.012 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.012 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:35.012 13:01:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:35.012 13:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:35.599 13:01:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:36.538 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:36.538 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:36.538 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.538 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.796 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.796 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:36.796 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.796 13:01:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:37.054 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.054 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:37.054 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.054 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:37.312 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.312 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:37.312 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.312 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.570 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.570 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:37.570 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.570 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:37.828 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.828 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:37.828 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.828 13:01:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:38.085 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.085 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3464644 00:22:38.085 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3464644 ']' 00:22:38.085 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3464644 00:22:38.085 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:38.085 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.086 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3464644 00:22:38.086 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:38.086 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:38.086 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3464644' 00:22:38.086 killing process with pid 3464644 00:22:38.086 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3464644 00:22:38.086 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3464644 00:22:38.347 Connection closed with partial response: 00:22:38.347 00:22:38.347 00:22:38.347 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3464644 00:22:38.347 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:38.347 [2024-07-15 13:01:20.030840] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:22:38.347 [2024-07-15 13:01:20.030943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464644 ] 00:22:38.347 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.347 [2024-07-15 13:01:20.094610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.347 [2024-07-15 13:01:20.203539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.347 Running I/O for 90 seconds... 
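The check_status / port_status / set_ANA_state calls traced above repeat the same pattern for every ANA transition: flip the listener state over the target-side RPC socket, sleep, then read the host-side io_paths through the bdevperf RPC socket and compare each field. A minimal sketch of those helpers, reconstructed only from the commands visible in this trace (the real test/nvmf/host/multipath_status.sh may differ in detail), looks roughly like:

    # Hypothetical reconstruction from the traced commands; NQN, IP, ports and
    # the /var/tmp/bdevperf.sock path are taken verbatim from the log above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    port_status() {
        # $1 = trsvcid (port), $2 = io_path field (current/connected/accessible), $3 = expected value
        local status
        status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
                 | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$status" == "$3" ]]
    }

    check_status() {
        # expected values, in trace order:
        # 4420.current 4421.current 4420.connected 4421.connected 4420.accessible 4421.accessible
        port_status 4420 current "$1" && port_status 4421 current "$2" &&
        port_status 4420 connected "$3" && port_status 4421 connected "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }

With active_passive policy (the default), only one path reports current=true at a time; after the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call traced above, both optimized paths report current=true, which is exactly what the check_status true true true true true true step verifies.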
00:22:38.347 [2024-07-15 13:01:36.867494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.867947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.867963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:109 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:38.347 [2024-07-15 13:01:36.868828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.347 [2024-07-15 13:01:36.868845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.868867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.868884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.868906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.868921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.868943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.868959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.868982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.868998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:37264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:38.348 [2024-07-15 13:01:36.869074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:37272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:37280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:37304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:37312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:37328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:37344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:37360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.869970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.869996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:37368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.870014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.870056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:37384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.870114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:37392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.870156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:37400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.870198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:37408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:38.348 [2024-07-15 13:01:36.870243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:22:38.348 [2024-07-15 13:01:36.870841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.870978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.870995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.871022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.871038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.871084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.348 [2024-07-15 13:01:36.871101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:38.348 [2024-07-15 13:01:36.871127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.349 [2024-07-15 13:01:36.871143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:38.349 [2024-07-15 13:01:36.871169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.349 [2024-07-15 13:01:36.871186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:38.349 [2024-07-15 13:01:36.871212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.349 [2024-07-15 13:01:36.871228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:38.349 [2024-07-15 13:01:36.871255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:38.349 [2024-07-15 13:01:36.871271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
[... several hundred near-identical nvme_qpair.c NOTICE pairs elided: READ/WRITE commands on qid:1 (LBAs ~36624-37472 logged at 13:01:36, LBAs ~22184-23200 logged at 13:01:53) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:22:38.352 Received shutdown signal, test time was about 34.416485 seconds
00:22:38.352
00:22:38.352 Latency(us)
00:22:38.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.352 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:38.352 Verification LBA range: start 0x0 length 0x4000
00:22:38.352 Nvme0n1 : 34.42 8404.80 32.83 0.00 0.00 15205.58 282.17 4026531.84
00:22:38.352 ===================================================================================================================
00:22:38.352 Total : 8404.80 32.83 0.00 0.00 15205.58 282.17 4026531.84
00:22:38.352 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.609 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.609 rmmod nvme_tcp 00:22:38.609 rmmod nvme_fabrics 00:22:38.609 rmmod nvme_keyring 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3464477 ']' 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3464477 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3464477 ']' 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3464477 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.610 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3464477 00:22:38.867 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.867 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.867 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3464477' 00:22:38.867 killing process with pid 3464477 00:22:38.867 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3464477 00:22:38.867 13:01:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3464477 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.127 13:01:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.032 13:01:59 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:41.032 00:22:41.032 real 0m43.191s 00:22:41.032 user 2m10.888s 00:22:41.032 sys 0m11.781s 00:22:41.032 13:01:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.032 13:01:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 ************************************ 00:22:41.032 END TEST nvmf_host_multipath_status 00:22:41.032 ************************************ 00:22:41.032 13:01:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:41.032 13:01:59 nvmf_tcp -- 
nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:41.032 13:01:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:41.032 13:01:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.032 13:01:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.032 ************************************ 00:22:41.032 START TEST nvmf_discovery_remove_ifc 00:22:41.032 ************************************ 00:22:41.033 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:41.033 * Looking for test storage... 00:22:41.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.292 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.293 
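The nvmf/common.sh bootstrap traced above only establishes defaults for the host-side tests: listener ports 4420/4421/4422, a host NQN freshly generated with nvme gen-hostnqn, and the "nvme connect" wrapper. As a rough guide (not part of this log), the sketch below shows how such variables are typically consumed once a target is listening; the 10.0.0.2 address matches the target IP configured later in this trace, while the subsystem NQN is an illustrative placeholder rather than a value read from this run.
# Hedged sketch only -- assumes nvme-cli is installed and an SPDK target is already
# serving a subsystem; nqn.2016-06.io.spdk:cnode1 is illustrative.
NVMF_PORT=4420
NVME_HOSTNQN=$(nvme gen-hostnqn)          # same helper the harness calls above
nvme discover -t tcp -a 10.0.0.2 -s 8009 --hostnqn="$NVME_HOSTNQN"    # query the discovery service
nvme connect -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"
nvme list-subsys                          # verify the new controller/path
nvme disconnect -n nqn.2016-06.io.spdk:cnode1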
13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp 
== rdma ']' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.293 13:01:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@298 -- # local -ga mlx 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:43.196 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:43.196 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.196 13:02:01 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:43.196 Found net devices under 0000:84:00.0: cvl_0_0 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:43.196 Found net devices under 0000:84:00.1: cvl_0_1 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.196 13:02:01 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.196 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:22:43.455 00:22:43.455 --- 10.0.0.2 ping statistics --- 00:22:43.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.455 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:43.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:22:43.455 00:22:43.455 --- 10.0.0.1 ping statistics --- 00:22:43.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.455 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3471118 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3471118 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3471118 ']' 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.455 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.455 [2024-07-15 13:02:01.596246] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
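For reference, the nvmf_tcp_init sequence traced above builds a two-port, single-namespace topology: the target-side port (cvl_0_0 here) is moved into a network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, a firewall rule admits the NVMe/TCP port, and reachability is verified in both directions. A condensed sketch of those commands, with names and addresses exactly as in this log (run as root):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                        # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (root namespace)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target reachability
    ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator reachability

The target application is then launched inside the namespace (ip netns exec "$NS" .../nvmf_tgt -m 0x2, as traced above), which is why every target-side command in the rest of this log carries the netns prefix.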
00:22:43.455 [2024-07-15 13:02:01.596328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.455 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.455 [2024-07-15 13:02:01.658951] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.713 [2024-07-15 13:02:01.759627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.713 [2024-07-15 13:02:01.759683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.713 [2024-07-15 13:02:01.759714] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.713 [2024-07-15 13:02:01.759725] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.713 [2024-07-15 13:02:01.759735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:43.713 [2024-07-15 13:02:01.759786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.713 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.713 [2024-07-15 13:02:01.912448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.971 [2024-07-15 13:02:01.920652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:43.971 null0 00:22:43.971 [2024-07-15 13:02:01.952562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3471143 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3471143 /tmp/host.sock 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3471143 ']' 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:43.971 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.971 13:02:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.971 [2024-07-15 13:02:02.019362] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:22:43.971 [2024-07-15 13:02:02.019435] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471143 ] 00:22:43.971 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.971 [2024-07-15 13:02:02.079190] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.229 [2024-07-15 13:02:02.193440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.229 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.230 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:44.230 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.230 13:02:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.602 [2024-07-15 13:02:03.411489] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.602 [2024-07-15 13:02:03.411527] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.602 [2024-07-15 13:02:03.411550] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.602 [2024-07-15 13:02:03.539963] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:45.602 [2024-07-15 13:02:03.766128] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:45.602 [2024-07-15 13:02:03.766193] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:45.602 [2024-07-15 13:02:03.766229] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:45.602 [2024-07-15 13:02:03.766254] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.602 [2024-07-15 13:02:03.766294] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.602 [2024-07-15 13:02:03.769266] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22c5e00 was disconnected and freed. delete nvme_qpair. 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.602 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.859 13:02:03 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.859 13:02:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.790 13:02:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:48.161 13:02:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.094 13:02:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.094 13:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.094 13:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.094 13:02:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.024 13:02:08 
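The repeating get_bdev_list / sleep 1 pattern above is the test's wait_for_bdev helper polling the host application over its RPC socket until the bdev list matches an expected value: first "nvme0n1", then the empty string once the interface has been removed, and finally "nvme1n1" after recovery. A hypothetical standalone equivalent, with rpc.py standing in for spdk/scripts/rpc.py and any retry cap the real helper may apply omitted:

    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected="$1"
        # Poll once per second until the bdev list equals the expected string.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # block until discovery has attached the subsystem's namespace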
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.024 13:02:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.954 13:02:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.219 [2024-07-15 13:02:09.207169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:51.219 [2024-07-15 13:02:09.207249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.219 [2024-07-15 13:02:09.207269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.219 [2024-07-15 13:02:09.207286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.219 [2024-07-15 13:02:09.207298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.219 [2024-07-15 13:02:09.207311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.219 [2024-07-15 13:02:09.207323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.219 [2024-07-15 13:02:09.207335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.219 [2024-07-15 13:02:09.207347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.219 [2024-07-15 13:02:09.207360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.219 [2024-07-15 13:02:09.207373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.219 [2024-07-15 13:02:09.207386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228c870 is same with the state(5) to be set 00:22:51.219 [2024-07-15 13:02:09.217185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228c870 (9): Bad file descriptor 00:22:51.219 [2024-07-15 13:02:09.227231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.150 [2024-07-15 13:02:10.247792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:52.150 [2024-07-15 13:02:10.247875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x228c870 with addr=10.0.0.2, port=4420 00:22:52.150 [2024-07-15 13:02:10.247912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228c870 is same with the state(5) to be set 00:22:52.150 [2024-07-15 13:02:10.247974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228c870 (9): Bad file descriptor 00:22:52.150 [2024-07-15 13:02:10.248434] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:52.150 [2024-07-15 13:02:10.248465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:52.150 [2024-07-15 13:02:10.248480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:52.150 [2024-07-15 13:02:10.248497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:52.150 [2024-07-15 13:02:10.248533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:52.150 [2024-07-15 13:02:10.248551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:52.150 13:02:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.083 [2024-07-15 13:02:11.251057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:53.083 [2024-07-15 13:02:11.251116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:53.083 [2024-07-15 13:02:11.251131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:53.083 [2024-07-15 13:02:11.251145] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:53.083 [2024-07-15 13:02:11.251174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.083 [2024-07-15 13:02:11.251217] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:53.083 [2024-07-15 13:02:11.251262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.083 [2024-07-15 13:02:11.251282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.083 [2024-07-15 13:02:11.251302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.083 [2024-07-15 13:02:11.251325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.083 [2024-07-15 13:02:11.251338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.083 [2024-07-15 13:02:11.251350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.083 [2024-07-15 13:02:11.251363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.083 [2024-07-15 13:02:11.251375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.083 [2024-07-15 13:02:11.251388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:53.083 [2024-07-15 13:02:11.251400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.083 [2024-07-15 13:02:11.251412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
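The error burst above is the expected outcome of pulling the interface out from under an established connection: the read on the admin queue times out (errno 110), the host retries roughly once per second, and because the discovery session was started with --ctrlr-loss-timeout-sec 2 it gives up after about two seconds, deletes the controller, and removes the discovery entry, which is what lets the wait for an empty bdev list complete below. For reference, the discovery session traced earlier corresponds to an RPC invocation along these lines (rpc.py again standing in for spdk/scripts/rpc.py; every flag is taken verbatim from the trace):

    # Start persistent discovery against the target, with aggressive failure timers
    # so the test observes controller loss quickly.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach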
00:22:53.083 [2024-07-15 13:02:11.251512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x228bcf0 (9): Bad file descriptor 00:22:53.083 [2024-07-15 13:02:11.252542] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:53.083 [2024-07-15 13:02:11.252564] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.083 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:53.341 13:02:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:54.281 13:02:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.221 [2024-07-15 13:02:13.307964] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:55.221 [2024-07-15 13:02:13.308010] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:55.221 [2024-07-15 13:02:13.308035] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:55.221 [2024-07-15 13:02:13.395310] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:55.480 13:02:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:55.480 [2024-07-15 13:02:13.618688] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:55.480 [2024-07-15 13:02:13.618770] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:55.480 [2024-07-15 13:02:13.618811] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:55.480 [2024-07-15 13:02:13.618835] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:55.480 [2024-07-15 13:02:13.618849] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:55.480 [2024-07-15 13:02:13.625295] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22cf800 was disconnected and freed. delete nvme_qpair. 
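Restoring the address and link (the ip netns exec ... ip addr add / ip link set cvl_0_0 up pair traced above) lets the still-running discovery service reconnect. Because the failed controller was deleted rather than revived, the re-attached controller takes the next free name, nvme1, so the test now waits for nvme1n1 instead of nvme0n1. Condensed, the recovery step is:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # the replacement controller exposes the namespace under a new name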
00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3471143 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3471143 ']' 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3471143 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471143 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471143' 00:22:56.416 killing process with pid 3471143 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3471143 00:22:56.416 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3471143 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.675 rmmod nvme_tcp 00:22:56.675 rmmod nvme_fabrics 00:22:56.675 rmmod nvme_keyring 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
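Teardown follows the usual autotest pattern traced in this region: stop the host application, let nvmftestfini stop the target, unload the NVMe/TCP modules (the rmmod lines above), then remove the namespace and flush the remaining address (continued just below). A rough sketch, assuming $hostpid and $nvmfpid hold the two PIDs printed earlier (3471143 and 3471118 in this run); the ip netns delete line is an assumption standing in for remove_spdk_ns:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if it is already gone
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # works because the shell started the process
    }
    killprocess "$hostpid"                                # the /tmp/host.sock application
    killprocess "$nvmfpid"                                # nvmf_tgt inside the namespace
    modprobe -v -r nvme-tcp                               # also pulls nvme-fabrics / nvme-keyring out
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumption: what remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1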
00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3471118 ']' 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3471118 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3471118 ']' 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3471118 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.675 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471118 00:22:56.934 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:56.934 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:56.934 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471118' 00:22:56.934 killing process with pid 3471118 00:22:56.934 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3471118 00:22:56.934 13:02:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3471118 00:22:57.193 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.193 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.194 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.194 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.194 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.194 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.194 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.194 13:02:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.161 13:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.161 00:22:59.161 real 0m18.011s 00:22:59.161 user 0m26.017s 00:22:59.161 sys 0m3.155s 00:22:59.161 13:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:59.161 13:02:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:59.161 ************************************ 00:22:59.161 END TEST nvmf_discovery_remove_ifc 00:22:59.161 ************************************ 00:22:59.161 13:02:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:59.161 13:02:17 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:59.161 13:02:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:59.161 13:02:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.161 13:02:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.161 ************************************ 00:22:59.161 START TEST nvmf_identify_kernel_target 00:22:59.161 ************************************ 
00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:59.161 * Looking for test storage... 00:22:59.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:59.161 13:02:17 
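Sourcing nvmf/common.sh in the trace above also generates the host identity that later nvme connect calls reuse: a fresh host NQN from nvme-cli plus the matching host ID. Judging from the values printed in the trace, the derivation is effectively the following (the UUID-suffix extraction is inferred, not shown explicitly in the log):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: host ID is the UUID suffix, as the trace suggests
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")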
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.161 13:02:17 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:01.697 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:01.697 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:01.697 Found net devices under 0000:84:00.0: cvl_0_0 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.697 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:01.698 Found net devices under 0000:84:00.1: cvl_0_1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:23:01.698 00:23:01.698 --- 10.0.0.2 ping statistics --- 00:23:01.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.698 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:01.698 00:23:01.698 --- 10.0.0.1 ping statistics --- 00:23:01.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.698 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:01.698 13:02:19 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:01.698 13:02:19 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:02.635 Waiting for block devices as requested 00:23:02.635 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:02.893 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:02.893 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:03.151 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:03.151 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:03.151 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:03.151 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:03.411 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:03.411 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:03.411 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:03.411 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:03.670 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:03.670 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:03.670 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:03.929 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:03.929 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:03.929 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:04.188 No valid GPT data, bailing 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:04.188 00:23:04.188 Discovery Log Number of Records 2, Generation counter 2 00:23:04.188 =====Discovery Log Entry 0====== 00:23:04.188 trtype: tcp 00:23:04.188 adrfam: ipv4 00:23:04.188 subtype: current discovery subsystem 00:23:04.188 treq: not specified, sq flow control disable supported 00:23:04.188 portid: 1 00:23:04.188 trsvcid: 4420 00:23:04.188 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:04.188 traddr: 10.0.0.1 00:23:04.188 eflags: none 00:23:04.188 sectype: none 00:23:04.188 =====Discovery Log Entry 1====== 00:23:04.188 trtype: tcp 00:23:04.188 adrfam: ipv4 00:23:04.188 subtype: nvme subsystem 00:23:04.188 treq: not specified, sq flow control disable supported 00:23:04.188 portid: 1 00:23:04.188 trsvcid: 4420 00:23:04.188 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:04.188 traddr: 10.0.0.1 00:23:04.188 eflags: none 00:23:04.188 sectype: none 00:23:04.188 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:04.188 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:04.188 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.448 ===================================================== 00:23:04.448 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:04.448 ===================================================== 00:23:04.448 Controller Capabilities/Features 00:23:04.448 ================================ 00:23:04.448 Vendor ID: 0000 00:23:04.448 Subsystem Vendor ID: 0000 00:23:04.448 Serial Number: 30a0941587864048f721 00:23:04.448 Model Number: Linux 00:23:04.448 Firmware Version: 6.7.0-68 00:23:04.448 Recommended Arb Burst: 0 00:23:04.448 IEEE OUI Identifier: 00 00 00 00:23:04.448 Multi-path I/O 00:23:04.448 May have multiple subsystem ports: No 00:23:04.448 May have multiple 
controllers: No 00:23:04.448 Associated with SR-IOV VF: No 00:23:04.448 Max Data Transfer Size: Unlimited 00:23:04.448 Max Number of Namespaces: 0 00:23:04.448 Max Number of I/O Queues: 1024 00:23:04.448 NVMe Specification Version (VS): 1.3 00:23:04.448 NVMe Specification Version (Identify): 1.3 00:23:04.448 Maximum Queue Entries: 1024 00:23:04.448 Contiguous Queues Required: No 00:23:04.448 Arbitration Mechanisms Supported 00:23:04.448 Weighted Round Robin: Not Supported 00:23:04.448 Vendor Specific: Not Supported 00:23:04.448 Reset Timeout: 7500 ms 00:23:04.448 Doorbell Stride: 4 bytes 00:23:04.448 NVM Subsystem Reset: Not Supported 00:23:04.448 Command Sets Supported 00:23:04.448 NVM Command Set: Supported 00:23:04.448 Boot Partition: Not Supported 00:23:04.448 Memory Page Size Minimum: 4096 bytes 00:23:04.448 Memory Page Size Maximum: 4096 bytes 00:23:04.448 Persistent Memory Region: Not Supported 00:23:04.448 Optional Asynchronous Events Supported 00:23:04.448 Namespace Attribute Notices: Not Supported 00:23:04.448 Firmware Activation Notices: Not Supported 00:23:04.448 ANA Change Notices: Not Supported 00:23:04.448 PLE Aggregate Log Change Notices: Not Supported 00:23:04.448 LBA Status Info Alert Notices: Not Supported 00:23:04.448 EGE Aggregate Log Change Notices: Not Supported 00:23:04.448 Normal NVM Subsystem Shutdown event: Not Supported 00:23:04.448 Zone Descriptor Change Notices: Not Supported 00:23:04.448 Discovery Log Change Notices: Supported 00:23:04.448 Controller Attributes 00:23:04.448 128-bit Host Identifier: Not Supported 00:23:04.448 Non-Operational Permissive Mode: Not Supported 00:23:04.448 NVM Sets: Not Supported 00:23:04.448 Read Recovery Levels: Not Supported 00:23:04.448 Endurance Groups: Not Supported 00:23:04.448 Predictable Latency Mode: Not Supported 00:23:04.448 Traffic Based Keep ALive: Not Supported 00:23:04.448 Namespace Granularity: Not Supported 00:23:04.448 SQ Associations: Not Supported 00:23:04.448 UUID List: Not Supported 00:23:04.448 Multi-Domain Subsystem: Not Supported 00:23:04.448 Fixed Capacity Management: Not Supported 00:23:04.448 Variable Capacity Management: Not Supported 00:23:04.448 Delete Endurance Group: Not Supported 00:23:04.448 Delete NVM Set: Not Supported 00:23:04.448 Extended LBA Formats Supported: Not Supported 00:23:04.448 Flexible Data Placement Supported: Not Supported 00:23:04.448 00:23:04.448 Controller Memory Buffer Support 00:23:04.448 ================================ 00:23:04.448 Supported: No 00:23:04.448 00:23:04.448 Persistent Memory Region Support 00:23:04.448 ================================ 00:23:04.448 Supported: No 00:23:04.448 00:23:04.448 Admin Command Set Attributes 00:23:04.448 ============================ 00:23:04.448 Security Send/Receive: Not Supported 00:23:04.448 Format NVM: Not Supported 00:23:04.448 Firmware Activate/Download: Not Supported 00:23:04.448 Namespace Management: Not Supported 00:23:04.448 Device Self-Test: Not Supported 00:23:04.448 Directives: Not Supported 00:23:04.448 NVMe-MI: Not Supported 00:23:04.448 Virtualization Management: Not Supported 00:23:04.448 Doorbell Buffer Config: Not Supported 00:23:04.448 Get LBA Status Capability: Not Supported 00:23:04.448 Command & Feature Lockdown Capability: Not Supported 00:23:04.448 Abort Command Limit: 1 00:23:04.448 Async Event Request Limit: 1 00:23:04.448 Number of Firmware Slots: N/A 00:23:04.448 Firmware Slot 1 Read-Only: N/A 00:23:04.448 Firmware Activation Without Reset: N/A 00:23:04.448 Multiple Update Detection Support: N/A 
00:23:04.448 Firmware Update Granularity: No Information Provided 00:23:04.448 Per-Namespace SMART Log: No 00:23:04.448 Asymmetric Namespace Access Log Page: Not Supported 00:23:04.448 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:04.448 Command Effects Log Page: Not Supported 00:23:04.448 Get Log Page Extended Data: Supported 00:23:04.448 Telemetry Log Pages: Not Supported 00:23:04.448 Persistent Event Log Pages: Not Supported 00:23:04.448 Supported Log Pages Log Page: May Support 00:23:04.448 Commands Supported & Effects Log Page: Not Supported 00:23:04.448 Feature Identifiers & Effects Log Page:May Support 00:23:04.448 NVMe-MI Commands & Effects Log Page: May Support 00:23:04.448 Data Area 4 for Telemetry Log: Not Supported 00:23:04.448 Error Log Page Entries Supported: 1 00:23:04.448 Keep Alive: Not Supported 00:23:04.448 00:23:04.448 NVM Command Set Attributes 00:23:04.448 ========================== 00:23:04.448 Submission Queue Entry Size 00:23:04.448 Max: 1 00:23:04.448 Min: 1 00:23:04.448 Completion Queue Entry Size 00:23:04.448 Max: 1 00:23:04.448 Min: 1 00:23:04.448 Number of Namespaces: 0 00:23:04.448 Compare Command: Not Supported 00:23:04.448 Write Uncorrectable Command: Not Supported 00:23:04.448 Dataset Management Command: Not Supported 00:23:04.448 Write Zeroes Command: Not Supported 00:23:04.448 Set Features Save Field: Not Supported 00:23:04.448 Reservations: Not Supported 00:23:04.448 Timestamp: Not Supported 00:23:04.448 Copy: Not Supported 00:23:04.448 Volatile Write Cache: Not Present 00:23:04.448 Atomic Write Unit (Normal): 1 00:23:04.448 Atomic Write Unit (PFail): 1 00:23:04.448 Atomic Compare & Write Unit: 1 00:23:04.448 Fused Compare & Write: Not Supported 00:23:04.448 Scatter-Gather List 00:23:04.448 SGL Command Set: Supported 00:23:04.448 SGL Keyed: Not Supported 00:23:04.448 SGL Bit Bucket Descriptor: Not Supported 00:23:04.448 SGL Metadata Pointer: Not Supported 00:23:04.448 Oversized SGL: Not Supported 00:23:04.448 SGL Metadata Address: Not Supported 00:23:04.448 SGL Offset: Supported 00:23:04.448 Transport SGL Data Block: Not Supported 00:23:04.448 Replay Protected Memory Block: Not Supported 00:23:04.448 00:23:04.448 Firmware Slot Information 00:23:04.448 ========================= 00:23:04.448 Active slot: 0 00:23:04.448 00:23:04.448 00:23:04.448 Error Log 00:23:04.448 ========= 00:23:04.448 00:23:04.448 Active Namespaces 00:23:04.448 ================= 00:23:04.448 Discovery Log Page 00:23:04.448 ================== 00:23:04.448 Generation Counter: 2 00:23:04.448 Number of Records: 2 00:23:04.448 Record Format: 0 00:23:04.448 00:23:04.448 Discovery Log Entry 0 00:23:04.448 ---------------------- 00:23:04.448 Transport Type: 3 (TCP) 00:23:04.448 Address Family: 1 (IPv4) 00:23:04.448 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:04.448 Entry Flags: 00:23:04.448 Duplicate Returned Information: 0 00:23:04.448 Explicit Persistent Connection Support for Discovery: 0 00:23:04.448 Transport Requirements: 00:23:04.448 Secure Channel: Not Specified 00:23:04.448 Port ID: 1 (0x0001) 00:23:04.448 Controller ID: 65535 (0xffff) 00:23:04.448 Admin Max SQ Size: 32 00:23:04.448 Transport Service Identifier: 4420 00:23:04.448 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:04.448 Transport Address: 10.0.0.1 00:23:04.448 Discovery Log Entry 1 00:23:04.448 ---------------------- 00:23:04.448 Transport Type: 3 (TCP) 00:23:04.448 Address Family: 1 (IPv4) 00:23:04.448 Subsystem Type: 2 (NVM Subsystem) 00:23:04.448 Entry Flags: 
00:23:04.448 Duplicate Returned Information: 0 00:23:04.448 Explicit Persistent Connection Support for Discovery: 0 00:23:04.448 Transport Requirements: 00:23:04.448 Secure Channel: Not Specified 00:23:04.448 Port ID: 1 (0x0001) 00:23:04.448 Controller ID: 65535 (0xffff) 00:23:04.448 Admin Max SQ Size: 32 00:23:04.448 Transport Service Identifier: 4420 00:23:04.449 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:04.449 Transport Address: 10.0.0.1 00:23:04.449 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:04.449 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.449 get_feature(0x01) failed 00:23:04.449 get_feature(0x02) failed 00:23:04.449 get_feature(0x04) failed 00:23:04.449 ===================================================== 00:23:04.449 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:04.449 ===================================================== 00:23:04.449 Controller Capabilities/Features 00:23:04.449 ================================ 00:23:04.449 Vendor ID: 0000 00:23:04.449 Subsystem Vendor ID: 0000 00:23:04.449 Serial Number: 9a4d752da842e4a5195c 00:23:04.449 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:04.449 Firmware Version: 6.7.0-68 00:23:04.449 Recommended Arb Burst: 6 00:23:04.449 IEEE OUI Identifier: 00 00 00 00:23:04.449 Multi-path I/O 00:23:04.449 May have multiple subsystem ports: Yes 00:23:04.449 May have multiple controllers: Yes 00:23:04.449 Associated with SR-IOV VF: No 00:23:04.449 Max Data Transfer Size: Unlimited 00:23:04.449 Max Number of Namespaces: 1024 00:23:04.449 Max Number of I/O Queues: 128 00:23:04.449 NVMe Specification Version (VS): 1.3 00:23:04.449 NVMe Specification Version (Identify): 1.3 00:23:04.449 Maximum Queue Entries: 1024 00:23:04.449 Contiguous Queues Required: No 00:23:04.449 Arbitration Mechanisms Supported 00:23:04.449 Weighted Round Robin: Not Supported 00:23:04.449 Vendor Specific: Not Supported 00:23:04.449 Reset Timeout: 7500 ms 00:23:04.449 Doorbell Stride: 4 bytes 00:23:04.449 NVM Subsystem Reset: Not Supported 00:23:04.449 Command Sets Supported 00:23:04.449 NVM Command Set: Supported 00:23:04.449 Boot Partition: Not Supported 00:23:04.449 Memory Page Size Minimum: 4096 bytes 00:23:04.449 Memory Page Size Maximum: 4096 bytes 00:23:04.449 Persistent Memory Region: Not Supported 00:23:04.449 Optional Asynchronous Events Supported 00:23:04.449 Namespace Attribute Notices: Supported 00:23:04.449 Firmware Activation Notices: Not Supported 00:23:04.449 ANA Change Notices: Supported 00:23:04.449 PLE Aggregate Log Change Notices: Not Supported 00:23:04.449 LBA Status Info Alert Notices: Not Supported 00:23:04.449 EGE Aggregate Log Change Notices: Not Supported 00:23:04.449 Normal NVM Subsystem Shutdown event: Not Supported 00:23:04.449 Zone Descriptor Change Notices: Not Supported 00:23:04.449 Discovery Log Change Notices: Not Supported 00:23:04.449 Controller Attributes 00:23:04.449 128-bit Host Identifier: Supported 00:23:04.449 Non-Operational Permissive Mode: Not Supported 00:23:04.449 NVM Sets: Not Supported 00:23:04.449 Read Recovery Levels: Not Supported 00:23:04.449 Endurance Groups: Not Supported 00:23:04.449 Predictable Latency Mode: Not Supported 00:23:04.449 Traffic Based Keep ALive: Supported 00:23:04.449 Namespace Granularity: Not Supported 
00:23:04.449 SQ Associations: Not Supported 00:23:04.449 UUID List: Not Supported 00:23:04.449 Multi-Domain Subsystem: Not Supported 00:23:04.449 Fixed Capacity Management: Not Supported 00:23:04.449 Variable Capacity Management: Not Supported 00:23:04.449 Delete Endurance Group: Not Supported 00:23:04.449 Delete NVM Set: Not Supported 00:23:04.449 Extended LBA Formats Supported: Not Supported 00:23:04.449 Flexible Data Placement Supported: Not Supported 00:23:04.449 00:23:04.449 Controller Memory Buffer Support 00:23:04.449 ================================ 00:23:04.449 Supported: No 00:23:04.449 00:23:04.449 Persistent Memory Region Support 00:23:04.449 ================================ 00:23:04.449 Supported: No 00:23:04.449 00:23:04.449 Admin Command Set Attributes 00:23:04.449 ============================ 00:23:04.449 Security Send/Receive: Not Supported 00:23:04.449 Format NVM: Not Supported 00:23:04.449 Firmware Activate/Download: Not Supported 00:23:04.449 Namespace Management: Not Supported 00:23:04.449 Device Self-Test: Not Supported 00:23:04.449 Directives: Not Supported 00:23:04.449 NVMe-MI: Not Supported 00:23:04.449 Virtualization Management: Not Supported 00:23:04.449 Doorbell Buffer Config: Not Supported 00:23:04.449 Get LBA Status Capability: Not Supported 00:23:04.449 Command & Feature Lockdown Capability: Not Supported 00:23:04.449 Abort Command Limit: 4 00:23:04.449 Async Event Request Limit: 4 00:23:04.449 Number of Firmware Slots: N/A 00:23:04.449 Firmware Slot 1 Read-Only: N/A 00:23:04.449 Firmware Activation Without Reset: N/A 00:23:04.449 Multiple Update Detection Support: N/A 00:23:04.449 Firmware Update Granularity: No Information Provided 00:23:04.449 Per-Namespace SMART Log: Yes 00:23:04.449 Asymmetric Namespace Access Log Page: Supported 00:23:04.449 ANA Transition Time : 10 sec 00:23:04.449 00:23:04.449 Asymmetric Namespace Access Capabilities 00:23:04.449 ANA Optimized State : Supported 00:23:04.449 ANA Non-Optimized State : Supported 00:23:04.449 ANA Inaccessible State : Supported 00:23:04.449 ANA Persistent Loss State : Supported 00:23:04.449 ANA Change State : Supported 00:23:04.449 ANAGRPID is not changed : No 00:23:04.449 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:04.449 00:23:04.449 ANA Group Identifier Maximum : 128 00:23:04.449 Number of ANA Group Identifiers : 128 00:23:04.449 Max Number of Allowed Namespaces : 1024 00:23:04.449 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:04.449 Command Effects Log Page: Supported 00:23:04.449 Get Log Page Extended Data: Supported 00:23:04.449 Telemetry Log Pages: Not Supported 00:23:04.449 Persistent Event Log Pages: Not Supported 00:23:04.449 Supported Log Pages Log Page: May Support 00:23:04.449 Commands Supported & Effects Log Page: Not Supported 00:23:04.449 Feature Identifiers & Effects Log Page:May Support 00:23:04.449 NVMe-MI Commands & Effects Log Page: May Support 00:23:04.449 Data Area 4 for Telemetry Log: Not Supported 00:23:04.449 Error Log Page Entries Supported: 128 00:23:04.449 Keep Alive: Supported 00:23:04.449 Keep Alive Granularity: 1000 ms 00:23:04.449 00:23:04.449 NVM Command Set Attributes 00:23:04.449 ========================== 00:23:04.449 Submission Queue Entry Size 00:23:04.449 Max: 64 00:23:04.449 Min: 64 00:23:04.449 Completion Queue Entry Size 00:23:04.449 Max: 16 00:23:04.449 Min: 16 00:23:04.449 Number of Namespaces: 1024 00:23:04.449 Compare Command: Not Supported 00:23:04.449 Write Uncorrectable Command: Not Supported 00:23:04.449 Dataset Management Command: Supported 
00:23:04.449 Write Zeroes Command: Supported 00:23:04.449 Set Features Save Field: Not Supported 00:23:04.449 Reservations: Not Supported 00:23:04.449 Timestamp: Not Supported 00:23:04.449 Copy: Not Supported 00:23:04.449 Volatile Write Cache: Present 00:23:04.449 Atomic Write Unit (Normal): 1 00:23:04.449 Atomic Write Unit (PFail): 1 00:23:04.449 Atomic Compare & Write Unit: 1 00:23:04.449 Fused Compare & Write: Not Supported 00:23:04.449 Scatter-Gather List 00:23:04.449 SGL Command Set: Supported 00:23:04.449 SGL Keyed: Not Supported 00:23:04.449 SGL Bit Bucket Descriptor: Not Supported 00:23:04.449 SGL Metadata Pointer: Not Supported 00:23:04.449 Oversized SGL: Not Supported 00:23:04.449 SGL Metadata Address: Not Supported 00:23:04.449 SGL Offset: Supported 00:23:04.449 Transport SGL Data Block: Not Supported 00:23:04.449 Replay Protected Memory Block: Not Supported 00:23:04.449 00:23:04.449 Firmware Slot Information 00:23:04.449 ========================= 00:23:04.449 Active slot: 0 00:23:04.449 00:23:04.449 Asymmetric Namespace Access 00:23:04.449 =========================== 00:23:04.449 Change Count : 0 00:23:04.449 Number of ANA Group Descriptors : 1 00:23:04.449 ANA Group Descriptor : 0 00:23:04.449 ANA Group ID : 1 00:23:04.449 Number of NSID Values : 1 00:23:04.449 Change Count : 0 00:23:04.449 ANA State : 1 00:23:04.449 Namespace Identifier : 1 00:23:04.449 00:23:04.449 Commands Supported and Effects 00:23:04.449 ============================== 00:23:04.449 Admin Commands 00:23:04.449 -------------- 00:23:04.449 Get Log Page (02h): Supported 00:23:04.449 Identify (06h): Supported 00:23:04.449 Abort (08h): Supported 00:23:04.449 Set Features (09h): Supported 00:23:04.449 Get Features (0Ah): Supported 00:23:04.449 Asynchronous Event Request (0Ch): Supported 00:23:04.449 Keep Alive (18h): Supported 00:23:04.449 I/O Commands 00:23:04.449 ------------ 00:23:04.449 Flush (00h): Supported 00:23:04.449 Write (01h): Supported LBA-Change 00:23:04.449 Read (02h): Supported 00:23:04.449 Write Zeroes (08h): Supported LBA-Change 00:23:04.449 Dataset Management (09h): Supported 00:23:04.449 00:23:04.449 Error Log 00:23:04.449 ========= 00:23:04.449 Entry: 0 00:23:04.449 Error Count: 0x3 00:23:04.449 Submission Queue Id: 0x0 00:23:04.449 Command Id: 0x5 00:23:04.449 Phase Bit: 0 00:23:04.449 Status Code: 0x2 00:23:04.449 Status Code Type: 0x0 00:23:04.449 Do Not Retry: 1 00:23:04.449 Error Location: 0x28 00:23:04.449 LBA: 0x0 00:23:04.449 Namespace: 0x0 00:23:04.449 Vendor Log Page: 0x0 00:23:04.449 ----------- 00:23:04.449 Entry: 1 00:23:04.450 Error Count: 0x2 00:23:04.450 Submission Queue Id: 0x0 00:23:04.450 Command Id: 0x5 00:23:04.450 Phase Bit: 0 00:23:04.450 Status Code: 0x2 00:23:04.450 Status Code Type: 0x0 00:23:04.450 Do Not Retry: 1 00:23:04.450 Error Location: 0x28 00:23:04.450 LBA: 0x0 00:23:04.450 Namespace: 0x0 00:23:04.450 Vendor Log Page: 0x0 00:23:04.450 ----------- 00:23:04.450 Entry: 2 00:23:04.450 Error Count: 0x1 00:23:04.450 Submission Queue Id: 0x0 00:23:04.450 Command Id: 0x4 00:23:04.450 Phase Bit: 0 00:23:04.450 Status Code: 0x2 00:23:04.450 Status Code Type: 0x0 00:23:04.450 Do Not Retry: 1 00:23:04.450 Error Location: 0x28 00:23:04.450 LBA: 0x0 00:23:04.450 Namespace: 0x0 00:23:04.450 Vendor Log Page: 0x0 00:23:04.450 00:23:04.450 Number of Queues 00:23:04.450 ================ 00:23:04.450 Number of I/O Submission Queues: 128 00:23:04.450 Number of I/O Completion Queues: 128 00:23:04.450 00:23:04.450 ZNS Specific Controller Data 00:23:04.450 
============================ 00:23:04.450 Zone Append Size Limit: 0 00:23:04.450 00:23:04.450 00:23:04.450 Active Namespaces 00:23:04.450 ================= 00:23:04.450 get_feature(0x05) failed 00:23:04.450 Namespace ID:1 00:23:04.450 Command Set Identifier: NVM (00h) 00:23:04.450 Deallocate: Supported 00:23:04.450 Deallocated/Unwritten Error: Not Supported 00:23:04.450 Deallocated Read Value: Unknown 00:23:04.450 Deallocate in Write Zeroes: Not Supported 00:23:04.450 Deallocated Guard Field: 0xFFFF 00:23:04.450 Flush: Supported 00:23:04.450 Reservation: Not Supported 00:23:04.450 Namespace Sharing Capabilities: Multiple Controllers 00:23:04.450 Size (in LBAs): 1953525168 (931GiB) 00:23:04.450 Capacity (in LBAs): 1953525168 (931GiB) 00:23:04.450 Utilization (in LBAs): 1953525168 (931GiB) 00:23:04.450 UUID: 1d7d8609-745e-477f-a319-1d1333890fd7 00:23:04.450 Thin Provisioning: Not Supported 00:23:04.450 Per-NS Atomic Units: Yes 00:23:04.450 Atomic Boundary Size (Normal): 0 00:23:04.450 Atomic Boundary Size (PFail): 0 00:23:04.450 Atomic Boundary Offset: 0 00:23:04.450 NGUID/EUI64 Never Reused: No 00:23:04.450 ANA group ID: 1 00:23:04.450 Namespace Write Protected: No 00:23:04.450 Number of LBA Formats: 1 00:23:04.450 Current LBA Format: LBA Format #00 00:23:04.450 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:04.450 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:04.450 rmmod nvme_tcp 00:23:04.450 rmmod nvme_fabrics 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.450 13:02:22 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.985 
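[Note] For reference, the configure_kernel_target sequence traced earlier (nvmf/common.sh@632-677) and the clean_kernel_target teardown traced just below reduce to a short configfs procedure. This is a minimal sketch only, assuming the nvmet/nvmet_tcp modules, /dev/nvme0n1 as the backing namespace, and the 10.0.0.1:4420 listener used in this run; the configfs attribute file names are the standard kernel nvmet ones, inferred rather than shown verbatim in the trace:

    # Kernel NVMe-oF/TCP target setup, condensed from the trace above (sketch, not the test script)
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
    echo 1            > "$subsys/attr_allow_any_host"        # assumption: allow-any-host, as the harness default
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # backing block device found by setup.sh reset
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

    # Teardown, mirroring the clean_kernel_target trace that follows:
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
    modprobe -r nvmet_tcp nvmet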
13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:06.985 13:02:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:07.919 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:07.919 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:07.919 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:08.853 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:08.853 00:23:08.853 real 0m9.777s 00:23:08.853 user 0m2.106s 00:23:08.853 sys 0m3.609s 00:23:08.853 13:02:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.853 13:02:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.853 ************************************ 00:23:08.853 END TEST nvmf_identify_kernel_target 00:23:08.853 ************************************ 00:23:08.853 13:02:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:08.853 13:02:27 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:08.853 13:02:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.853 13:02:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.853 13:02:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.112 ************************************ 00:23:09.112 START TEST nvmf_auth_host 00:23:09.112 ************************************ 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:09.112 * Looking for test storage... 00:23:09.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.112 13:02:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.014 
13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:11.014 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:11.014 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.014 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:11.015 Found net devices under 0000:84:00.0: 
cvl_0_0 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:11.015 Found net devices under 0000:84:00.1: cvl_0_1 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.015 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:23:11.273 00:23:11.273 --- 10.0.0.2 ping statistics --- 00:23:11.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.273 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:23:11.273 00:23:11.273 --- 10.0.0.1 ping statistics --- 00:23:11.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.273 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:23:11.273 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3478385 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3478385 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3478385 ']' 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
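The stretch above is nvmf_tcp_init: the first E810 port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, the link is sanity-checked with ping in both directions, and the SPDK application under test is then started inside the namespace. A condensed recap of that setup (interface names are the ice/cvl names from this host; substitute local names when reproducing):

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk      # names as used in this run
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                        # isolate one port in the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                    # root-namespace side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
  # the SPDK app under test then runs inside the namespace:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &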
00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.274 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a19733de53cbfb063f2fc2650ee9ae16 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.iSB 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a19733de53cbfb063f2fc2650ee9ae16 0 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a19733de53cbfb063f2fc2650ee9ae16 0 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a19733de53cbfb063f2fc2650ee9ae16 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.532 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.iSB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.iSB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.iSB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:11.791 
13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8e87b86acc06fc8d213b3e4f6dc0475b31c3bf96b13563722933279871e0cf30 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.m6E 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8e87b86acc06fc8d213b3e4f6dc0475b31c3bf96b13563722933279871e0cf30 3 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8e87b86acc06fc8d213b3e4f6dc0475b31c3bf96b13563722933279871e0cf30 3 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8e87b86acc06fc8d213b3e4f6dc0475b31c3bf96b13563722933279871e0cf30 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.m6E 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.m6E 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.m6E 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e3ba42607987fc4edb3eb1d7fc47ec8005b633f09ea58ec9 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Awj 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e3ba42607987fc4edb3eb1d7fc47ec8005b633f09ea58ec9 0 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e3ba42607987fc4edb3eb1d7fc47ec8005b633f09ea58ec9 0 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e3ba42607987fc4edb3eb1d7fc47ec8005b633f09ea58ec9 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Awj 00:23:11.791 13:02:29 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Awj 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Awj 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b615681b2f275d1928c4a7a6cc7060066826ede22d746d6 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.lBB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b615681b2f275d1928c4a7a6cc7060066826ede22d746d6 2 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b615681b2f275d1928c4a7a6cc7060066826ede22d746d6 2 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b615681b2f275d1928c4a7a6cc7060066826ede22d746d6 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.lBB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.lBB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.lBB 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.791 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=760eca5c7a0209282a578c5a932640f6 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0NA 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 760eca5c7a0209282a578c5a932640f6 1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 760eca5c7a0209282a578c5a932640f6 1 
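Each gen_dhchap_key call in this stretch (and the ones that continue below) draws len/2 random bytes, hex-encodes them with xxd into a len-character secret, wraps the result into the DHHC-1 secret form via an inline 'python -' snippet, and stores it in a mode-0600 temp file whose path is handed back to host/auth.sh. xtrace does not show the redirection into the temp file or the python body, so the following is a sketch of the flow rather than the literal function body:

  declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  gen_dhchap_key() {                 # sketch of nvmf/common.sh gen_dhchap_key <digest> <len>
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)             # len hex characters of secret
      file=$(mktemp -t "spdk.key-$digest.XXX")
      format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # DHHC-1 wrapping, see the note further down
      chmod 0600 "$file"
      echo "$file"
  }
  keys[0]=$(gen_dhchap_key null 32); ckeys[0]=$(gen_dhchap_key sha512 64)   # as in host/auth.sh@73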
00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=760eca5c7a0209282a578c5a932640f6 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0NA 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0NA 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0NA 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a47adfcb08d5ffb7cb652b2eca52abed 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.y0k 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a47adfcb08d5ffb7cb652b2eca52abed 1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a47adfcb08d5ffb7cb652b2eca52abed 1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a47adfcb08d5ffb7cb652b2eca52abed 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.y0k 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.y0k 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.y0k 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=d322342f1cfc62f8c45df11d939bb7995df283df11914c4a 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.N1H 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d322342f1cfc62f8c45df11d939bb7995df283df11914c4a 2 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d322342f1cfc62f8c45df11d939bb7995df283df11914c4a 2 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d322342f1cfc62f8c45df11d939bb7995df283df11914c4a 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:11.792 13:02:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.N1H 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.N1H 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.N1H 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=769b3e4d3bc001120f520d5134191edb 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Y5a 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 769b3e4d3bc001120f520d5134191edb 0 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 769b3e4d3bc001120f520d5134191edb 0 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=769b3e4d3bc001120f520d5134191edb 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Y5a 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Y5a 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Y5a 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=775515576c6b19672135c18bb7f57460442f2ae1ad377cb1bf34084d8281774e 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.we5 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 775515576c6b19672135c18bb7f57460442f2ae1ad377cb1bf34084d8281774e 3 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 775515576c6b19672135c18bb7f57460442f2ae1ad377cb1bf34084d8281774e 3 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=775515576c6b19672135c18bb7f57460442f2ae1ad377cb1bf34084d8281774e 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.we5 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.we5 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.we5 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3478385 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3478385 ']' 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:12.050 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
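The 'python -' step inside format_dhchap_key is what produces the DHHC-1:<hash-id>:<base64>: strings that appear later in this log. In the standard NVMe-oF DH-HMAC-CHAP secret representation the base64 payload is the secret bytes followed by their CRC-32, appended least-significant byte first (the same encoding nvme-cli's gen-dhchap-key uses). The inline snippet itself is not visible in the trace, so this stand-alone equivalent is a sketch under that assumption:

  format_dhchap_key() {   # hypothetical stand-alone version of the common.sh helper
      local key=$1 hash_id=$2
      python3 -c 'import base64,struct,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(s+struct.pack("<I",zlib.crc32(s))).decode()))' "$key" "$hash_id"
  }
  format_dhchap_key e3ba42607987fc4edb3eb1d7fc47ec8005b633f09ea58ec9 0
  # compare with the DHHC-1:00:ZTNiYTQy...: string used for keys[1] further down in this log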
00:23:12.051 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:12.051 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iSB 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.m6E ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m6E 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Awj 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.lBB ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lBB 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0NA 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.y0k ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.y0k 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
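Each secret file is then registered with the running SPDK app as a named keyring entry: keyN is the host secret and ckeyN the matching controller secret for key index N (ckeys[4] was left empty above, presumably to cover the case where no controller key is set). rpc_cmd is the autotest wrapper around the app's RPC socket (/var/tmp/spdk.sock); outside the harness the same registrations can be issued directly, for example:

  # register host/controller secret pairs with the app's keyring (paths from this run)
  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.iSB
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m6E
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.Awj
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.lBB
  # ... and so on for key2..key4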
00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.N1H 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Y5a ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Y5a 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.we5 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:12.308 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
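From here the roles flip: nvmet_auth_init configures the Linux kernel nvmet driver as the target at 10.0.0.1:4420, exposing subsystem nqn.2024-02.io.spdk:cnode0 backed by a local NVMe disk (the setup.sh reset / block-device scan that follows picks /dev/nvme0n1), while the SPDK app in the namespace will act as the NVMe-oF host that has to authenticate. The configfs paths assigned above are populated in the trace that follows; condensed, and with the attribute names filled in from the standard nvmet configfs layout since xtrace hides the redirection targets:

  SUB=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  PORT=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir -p "$SUB/namespaces/1" "$PORT"
  echo 1            > "$SUB/attr_allow_any_host"        # auth.sh later restricts access via allowed_hosts
  echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"   # block device selected by the scan below
  echo 1            > "$SUB/namespaces/1/enable"
  echo 10.0.0.1 > "$PORT/addr_traddr"
  echo tcp      > "$PORT/addr_trtype"
  echo 4420     > "$PORT/addr_trsvcid"
  echo ipv4     > "$PORT/addr_adrfam"
  ln -s "$SUB" "$PORT/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # condensed (the run also passes --hostnqn/--hostid); expect two records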
00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:12.309 13:02:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:13.686 Waiting for block devices as requested 00:23:13.686 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:23:13.686 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:13.686 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:13.998 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:13.998 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:13.998 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:13.998 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:13.998 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:14.256 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:14.256 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:14.256 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:14.256 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:14.513 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:14.513 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:14.513 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:14.513 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:14.770 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:15.028 13:02:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:15.286 No valid GPT data, bailing 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:23:15.286 00:23:15.286 Discovery Log Number of Records 2, Generation counter 2 00:23:15.286 =====Discovery Log Entry 0====== 00:23:15.286 trtype: tcp 00:23:15.286 adrfam: ipv4 00:23:15.286 subtype: current discovery subsystem 00:23:15.286 treq: not specified, sq flow control disable supported 00:23:15.286 portid: 1 00:23:15.286 trsvcid: 4420 00:23:15.286 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:15.286 traddr: 10.0.0.1 00:23:15.286 eflags: none 00:23:15.286 sectype: none 00:23:15.286 =====Discovery Log Entry 1====== 00:23:15.286 trtype: tcp 00:23:15.286 adrfam: ipv4 00:23:15.286 subtype: nvme subsystem 00:23:15.286 treq: not specified, sq flow control disable supported 00:23:15.286 portid: 1 00:23:15.286 trsvcid: 4420 00:23:15.286 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:15.286 traddr: 10.0.0.1 00:23:15.286 eflags: none 00:23:15.286 sectype: none 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 
]] 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.286 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 nvme0n1 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.543 
13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:15.543 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.544 
13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.544 nvme0n1 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.544 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.802 13:02:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 nvme0n1 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
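Each iteration of the matrix then repeats the same dance: host/auth.sh has already created hosts/nqn.2024-02.io.spdk:host0 and linked it into the subsystem's allowed_hosts, nvmet_auth_set_key tells the kernel target which digest, DH group and secrets to expect for that host, and the SPDK side restricts its own digest/DH-group set before attaching with the named keyring entries. For the sha256/ffdhe2048/keyid=1 iteration just shown, the equivalent stand-alone steps would look roughly like this (the hosts/<hostnqn> attribute names are the standard nvmet ones, assumed because xtrace hides the redirects):

  H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'            > "$H/dhchap_hash"       # digest the target expects
  echo ffdhe2048                 > "$H/dhchap_dhgroup"    # DH group
  cat /tmp/spdk.key-null.Awj     > "$H/dhchap_key"        # host secret (keys[1])
  cat /tmp/spdk.key-sha384.lBB   > "$H/dhchap_ctrl_key"   # controller secret (ckeys[1]) for bidirectional auth
  # SPDK host side: limit the negotiable parameters, then attach and verify
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0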
00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 13:02:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:15.802 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 nvme0n1 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:16.062 13:02:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 nvme0n1 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.581 nvme0n1 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.581 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.838 nvme0n1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.838 13:02:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.098 nvme0n1 00:23:17.098 
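The xtrace above repeats the same cycle once per DH-HMAC-CHAP key: host/auth.sh installs the key (and, when present, the controller key) on the target via nvmet_auth_set_key, restricts the host to the digest/dhgroup under test with bdev_nvme_set_options, attaches the controller with --dhchap-key/--dhchap-ctrlr-key, checks that bdev_nvme_get_controllers reports nvme0, and detaches again. A rough bash sketch of that cycle, reconstructed from the trace and not the literal host/auth.sh (rpc_cmd, nvmet_auth_set_key and the keys/ckeys arrays are assumed to be defined as in the test suite; key${keyid}/ckey${keyid} are names of keys registered earlier in the run, which is not shown in this excerpt):

    # Sketch only: reconstructed from the xtrace above.
    digest=sha256
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in "${!keys[@]}"; do
        # Target side: provision the secret (and optional controller secret) for this keyid.
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: allow only the digest/dhgroup under test, then attach with DH-HMAC-CHAP.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # empty when no controller key
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

        # Authentication succeeded if the controller shows up under its expected name.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done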
13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.098 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.356 nvme0n1 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
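get_main_ns_ip, which the trace keeps resolving to 10.0.0.1, simply maps the transport to the environment variable that holds the right address and dereferences it. A minimal sketch of the helper traced at nvmf/common.sh@741-755, under the assumptions that the transport variable is called TEST_TRANSPORT (its name is not visible in the trace), that NVMF_INITIATOR_IP=10.0.0.1 as in this run, and that the failure branches return non-zero:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
            ["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                  # assumed variable name
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"                                         # indirect expansion -> 10.0.0.1 here
    }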
00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.356 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.357 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.357 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.357 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.357 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.357 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.615 nvme0n1 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.615 
13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.615 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.616 13:02:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.616 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.876 nvme0n1 00:23:17.876 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.876 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.876 13:02:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.876 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.876 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.876 13:02:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:17.876 13:02:36 
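One detail of the trace that looks odd at first sight: the success check is printed as [[ nvme0 == \n\v\m\e\0 ]]. In [[ lhs == rhs ]] an unquoted right-hand side is treated as a glob pattern, so the script quotes the expected controller name to force a literal comparison, and bash's xtrace re-prints that quoted word with every character escaped. Stripped of the escaping, the check amounts to no more than the following (rpc_cmd as in the trace):

    # Literal string comparison; xtrace renders the quoted RHS as \n\v\m\e\0.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]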
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.876 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.446 nvme0n1 00:23:18.446 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.446 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.446 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.446 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.447 13:02:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.447 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.706 nvme0n1 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.706 13:02:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.706 13:02:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.272 nvme0n1 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
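Also visible in the trace: for keyid 4 the ckeys entry is empty (ckey= followed by [[ -z '' ]]), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces nothing and bdev_nvme_attach_controller is called with --dhchap-key key4 only, i.e. without a bidirectional (controller) secret. A small standalone illustration of that expansion; the keyid 3 value is copied from the trace, the rest is illustrative:

    ckeys[3]="DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM:"  # controller secret from the trace
    ckeys[4]=""                                                             # keyid 4: no controller key
    for keyid in 3 4; do
        # ":+" expands to the flag only when ckeys[keyid] is set and non-empty.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-no --dhchap-ctrlr-key passed}"
    done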
00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:19.272 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.273 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.531 nvme0n1 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.531 13:02:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.531 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.790 nvme0n1 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:19.790 13:02:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.790 13:02:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.360 nvme0n1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.360 
13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.360 13:02:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.360 13:02:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.928 nvme0n1 00:23:20.928 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.928 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.928 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.928 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.928 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.187 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.188 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.188 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.188 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.188 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.188 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.188 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.759 nvme0n1 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.759 
13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.759 13:02:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.353 nvme0n1 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.353 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.354 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.922 nvme0n1 00:23:22.922 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.922 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.922 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.922 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.922 13:02:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.922 13:02:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.922 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 nvme0n1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.857 13:02:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.857 13:02:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.792 nvme0n1 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.792 13:02:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.725 nvme0n1 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.725 
13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.725 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
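(For orientation at this point in the trace: each digest/dhgroup/keyid iteration above reduces to the host-side RPC sequence sketched below. This is a condensed, illustrative reconstruction from the commands visible in the xtrace output, not a copy of auth.sh; rpc_cmd is the harness's wrapper around the SPDK JSON-RPC client, and key3/ckey3 are the named DH-HMAC-CHAP secrets the harness registered earlier, shown here only as an example keyid.)

    # Allow only this iteration's DH-HMAC-CHAP digest and DH group on the host side,
    # e.g. sha256 + ffdhe8192 as in the entries above.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach to the target at the initiator IP resolved by get_main_ns_ip (10.0.0.1 here),
    # presenting the host key and, when one is defined, the controller key for this keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # Verify that authentication succeeded: exactly one controller named nvme0 must exist.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear the connection down before the next digest/dhgroup/keyid combination.
    rpc_cmd bdev_nvme_detach_controller nvme0
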
00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.726 13:02:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.662 nvme0n1 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.662 
13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.662 13:02:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.920 13:02:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.920 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.920 13:02:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 nvme0n1 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.858 13:02:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 nvme0n1 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.858 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
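(The trace has now moved on to the sha384 digest. The overall structure being exercised is a triple loop over digests, DH groups, and key indices; a minimal sketch follows. The array contents are reconstructed from values observed in this log rather than copied from auth.sh, and the redirection targets of nvmet_auth_set_key's echo commands are not shown in the xtrace output, so naming them as the kernel nvmet configfs host attributes is an assumption, flagged again in the comments.)

    # Reconstructed from values seen in this log; the real auth.sh defines these itself.
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
    # keys[0..4] / ckeys[0..4] hold DHHC-1 secrets; ckeys can be empty (see keyid 4 above).

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Target side: program the expected digest, DH group and key(s) for the host.
                # The echo destinations are assumed to be the nvmet configfs host attributes
                # (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the trace only
                # records the echo commands themselves.
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # Host side: reconnect with matching settings and verify authentication,
                # then detach so the next combination starts clean.
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
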
00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 nvme0n1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.117 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.376 nvme0n1 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.376 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.634 nvme0n1 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.634 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.635 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.635 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.635 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.635 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.894 nvme0n1 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
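Before every attach the test resolves which address to dial via get_main_ns_ip, whose trace appears around this point: it maps the transport name to an environment-variable name and then dereferences that variable. A sketch of the indirection, with the candidate names taken from the trace (the transport variable and surrounding plumbing are assumptions):

# sketch of the address lookup traced above
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP

	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # here: NVMF_INITIATOR_IP
	[[ -z ${!ip} ]] && return 1            # indirect expansion to the real address
	echo "${!ip}"                          # 10.0.0.1 in this run
}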
00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.894 13:02:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.895 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.895 13:02:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.153 nvme0n1 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
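Each connect_authenticate call traced below is the host-side half of an iteration: restrict the allowed digest and DH group, attach with the key names for this keyid, confirm the controller actually came up, then detach so the next combination starts clean. A sketch of one iteration built from the RPCs shown in the trace (rpc_cmd is the test wrapper around SPDK's rpc.py; key1/ckey1 are key names registered earlier in the test, not shown here):

# host-side sketch of connect_authenticate sha384 ffdhe3072 1
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key1 --dhchap-ctrlr-key ckey1
# the attach only succeeds if DH-HMAC-CHAP completed; verify, then clean up
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0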
00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.153 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.412 nvme0n1 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:29.412 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.413 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.672 nvme0n1 00:23:29.672 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.672 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.672 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.672 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.673 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.933 nvme0n1 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.933 13:02:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.933 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.191 nvme0n1 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.191 13:02:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.191 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.449 nvme0n1 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.449 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.707 13:02:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.965 nvme0n1 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.965 13:02:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.965 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.223 nvme0n1 00:23:31.224 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.224 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:31.482 13:02:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.482 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.483 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.740 nvme0n1 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:31.740 13:02:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 nvme0n1 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.306 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.870 nvme0n1 00:23:32.870 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.871 13:02:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.439 nvme0n1 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.439 13:02:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.439 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.698 13:02:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.267 nvme0n1 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.267 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.268 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.268 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.268 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.836 nvme0n1 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
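The nvmf/common.sh lines repeated throughout this trace resolve which address the initiator should dial: a small map of transport name to environment-variable name is built, the entry for the transport under test is picked, and the dereferenced value is echoed. The following is a minimal sketch of that selection logic, not the verbatim helper; the variable name TEST_TRANSPORT is an assumption (the trace only shows its value, tcp), while NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP and the 10.0.0.1 result come straight from the log.

# Sketch of the get_main_ns_ip selection visible in the nvmf/common.sh trace.
# Assumes the test environment exports NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP
# (both 10.0.0.1 in this run) and that TEST_TRANSPORT names the transport (tcp here).
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates=()

    # Each transport reads its target address from a different variable.
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # Indirect expansion: ip holds the *name* of the variable to read.
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1

    echo "${!ip}"   # e.g. 10.0.0.1 for tcp in this run
}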
00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.836 13:02:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.406 nvme0n1 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
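Each block of the trace runs the same cycle for one (digest, dhgroup, keyid) combination out of host/auth.sh: program the key pair on the target side, constrain the initiator's allowed DH-HMAC-CHAP parameters, attach with bdev_nvme_attach_controller, confirm a controller named nvme0 appears, then detach before the next combination. Below is a condensed sketch of that cycle as it appears in the trace (the loops at host/auth.sh@100-@102 and connect_authenticate at @104), assuming rpc_cmd, nvmet_auth_set_key and the keys/ckeys arrays are already defined by the sourced test scripts.

# Sketch of one pass of the loop traced above; helpers are assumed sourced.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Target side: install the host key (and controller key, if present).
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

            # Initiator side: only this digest/DH group may be negotiated.
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

            # Bidirectional auth only when a controller key exists for this keyid.
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"

            # The connection only counts if the controller actually shows up.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
done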
00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.406 13:02:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.783 nvme0n1 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.783 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.784 13:02:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.720 nvme0n1 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.720 13:02:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.660 nvme0n1 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.660 13:02:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 nvme0n1 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.595 13:02:57 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.595 13:02:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.531 nvme0n1 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.531 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 nvme0n1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.790 13:02:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 nvme0n1 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 13:02:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.048 13:02:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:41.048 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.049 nvme0n1 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.049 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.307 13:02:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.307 13:02:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.307 nvme0n1 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.307 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.308 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:41.308 13:02:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.308 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.308 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.566 nvme0n1 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.566 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.567 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.825 nvme0n1 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.825 
13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.825 13:02:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.825 13:02:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.826 13:02:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.826 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.826 13:02:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.084 nvme0n1 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
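[editor's note] The trace records above (host/auth.sh@100-104) come from nested loops over digests, DH groups and key indices. A minimal sketch of that loop, reconstructed only from the xtrace lines in this log (the array names and function calls are the ones shown; the exact body of the real script may differ):
for digest in "${digests[@]}"; do                       # e.g. sha512 in this run
  for dhgroup in "${dhgroups[@]}"; do                   # e.g. ffdhe2048, ffdhe3072, ffdhe4096
    for keyid in "${!keys[@]}"; do                      # key indices 0..4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target-side key/ckey
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host attaches with DH-HMAC-CHAP
    done
  done
done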
00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.084 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.343 nvme0n1 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.343 13:03:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
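[editor's note] The repeated nvmf/common.sh@741-755 records trace get_main_ns_ip picking the address to attach to. A hedged reconstruction based only on those lines; the transport variable name and the behaviour when a value is empty are not visible in the trace and are assumptions here:
get_main_ns_ip() {
  local ip
  local -A ip_candidates=()
  ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA runs would use the first target IP
  ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP runs (this job) use the initiator IP
  [[ -z $TEST_TRANSPORT ]] && return 1                      # assumed variable name; trace shows "tcp"
  ip=${ip_candidates[$TEST_TRANSPORT]}                      # here: NVMF_INITIATOR_IP
  [[ -z ${!ip} ]] && return 1                               # indirect lookup; resolves to 10.0.0.1
  echo "${!ip}"
}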
00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.343 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.603 nvme0n1 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.603 
13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.603 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.862 nvme0n1 00:23:42.862 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.862 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.862 13:03:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.862 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.862 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.862 13:03:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.862 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.863 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.121 nvme0n1 00:23:43.121 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.121 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.121 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.121 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.121 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.121 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.384 13:03:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.384 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.718 nvme0n1 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
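[editor's note] Each iteration then runs connect_authenticate (host/auth.sh@55-65): configure the host's DH-HMAC-CHAP digest/group, attach with the per-index key, verify the controller exists, and detach. A sketch assembled from the RPC invocations visible in the trace; argument handling is approximated, and only keyid 4 omits the controller key (its ckey is empty):
connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3 ckey=()
  # Pass a controller key only when a ckey exists for this key index.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
  # Confirm the controller attached, then tear it down for the next iteration.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}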
00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.718 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.998 nvme0n1 00:23:43.998 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.998 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:23:43.998 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.998 13:03:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.998 13:03:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.998 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.999 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.258 nvme0n1 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.258 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.827 nvme0n1 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.827 13:03:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.394 nvme0n1 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.394 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.395 13:03:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.964 nvme0n1 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.964 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.965 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.531 nvme0n1 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.531 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.789 13:03:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.355 nvme0n1 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.355 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.923 nvme0n1 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.923 13:03:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YTE5NzMzZGU1M2NiZmIwNjNmMmZjMjY1MGVlOWFlMTZEkWKa: 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OGU4N2I4NmFjYzA2ZmM4ZDIxM2IzZTRmNmRjMDQ3NWIzMWMzYmY5NmIxMzU2MzcyMjkzMzI3OTg3MWUwY2YzML7l/Y0=: 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.923 13:03:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.857 nvme0n1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.857 13:03:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.797 nvme0n1 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.797 13:03:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NzYwZWNhNWM3YTAyMDkyODJhNTc4YzVhOTMyNjQwZjZWDME7: 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQ3YWRmY2IwOGQ1ZmZiN2NiNjUyYjJlY2E1MmFiZWSrnLN7: 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.797 13:03:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.733 nvme0n1 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDMyMjM0MmYxY2ZjNjJmOGM0NWRmMTFkOTM5YmI3OTk1ZGYyODNkZjExOTE0YzRhfIxkCQ==: 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: ]] 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzY5YjNlNGQzYmMwMDExMjBmNTIwZDUxMzQxOTFlZGIxtaGM: 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:50.733 13:03:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.733 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.991 13:03:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.927 nvme0n1 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Nzc1NTE1NTc2YzZiMTk2NzIxMzVjMThiYjdmNTc0NjA0NDJmMmFlMWFkMzc3Y2IxYmYzNDA4NGQ4MjgxNzc0ZS07S7E=: 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:51.927 13:03:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.873 nvme0n1 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTNiYTQyNjA3OTg3ZmM0ZWRiM2ViMWQ3ZmM0N2VjODAwNWI2MzNmMDllYTU4ZWM5HgBrFg==: 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI2MTU2ODFiMmYyNzVkMTkyOGM0YTdhNmNjNzA2MDA2NjgyNmVkZTIyZDc0NmQ2UFeU0g==: 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.873 
13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.873 request: 00:23:52.873 { 00:23:52.873 "name": "nvme0", 00:23:52.873 "trtype": "tcp", 00:23:52.873 "traddr": "10.0.0.1", 00:23:52.873 "adrfam": "ipv4", 00:23:52.873 "trsvcid": "4420", 00:23:52.873 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:52.873 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:52.873 "prchk_reftag": false, 00:23:52.873 "prchk_guard": false, 00:23:52.873 "hdgst": false, 00:23:52.873 "ddgst": false, 00:23:52.873 "method": "bdev_nvme_attach_controller", 00:23:52.873 "req_id": 1 00:23:52.873 } 00:23:52.873 Got JSON-RPC error response 00:23:52.873 response: 00:23:52.873 { 00:23:52.873 "code": -5, 00:23:52.873 "message": "Input/output error" 00:23:52.873 } 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:52.873 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.874 13:03:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.874 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.134 request: 00:23:53.134 { 00:23:53.134 "name": "nvme0", 00:23:53.134 "trtype": "tcp", 00:23:53.134 "traddr": "10.0.0.1", 00:23:53.134 "adrfam": "ipv4", 00:23:53.134 "trsvcid": "4420", 00:23:53.134 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:53.134 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:53.134 "prchk_reftag": false, 00:23:53.134 "prchk_guard": false, 00:23:53.134 "hdgst": false, 00:23:53.134 "ddgst": false, 00:23:53.134 "dhchap_key": "key2", 00:23:53.134 "method": "bdev_nvme_attach_controller", 00:23:53.134 "req_id": 1 00:23:53.134 } 00:23:53.134 Got JSON-RPC error response 00:23:53.134 response: 00:23:53.134 { 00:23:53.134 "code": -5, 00:23:53.134 "message": "Input/output error" 00:23:53.134 } 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:53.134 13:03:11 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.134 request: 00:23:53.134 { 00:23:53.134 "name": "nvme0", 00:23:53.134 "trtype": "tcp", 00:23:53.134 "traddr": "10.0.0.1", 00:23:53.134 "adrfam": "ipv4", 
00:23:53.134 "trsvcid": "4420", 00:23:53.134 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:53.134 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:53.134 "prchk_reftag": false, 00:23:53.134 "prchk_guard": false, 00:23:53.134 "hdgst": false, 00:23:53.134 "ddgst": false, 00:23:53.134 "dhchap_key": "key1", 00:23:53.134 "dhchap_ctrlr_key": "ckey2", 00:23:53.134 "method": "bdev_nvme_attach_controller", 00:23:53.134 "req_id": 1 00:23:53.134 } 00:23:53.134 Got JSON-RPC error response 00:23:53.134 response: 00:23:53.134 { 00:23:53.134 "code": -5, 00:23:53.134 "message": "Input/output error" 00:23:53.134 } 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.134 rmmod nvme_tcp 00:23:53.134 rmmod nvme_fabrics 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3478385 ']' 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3478385 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3478385 ']' 00:23:53.134 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3478385 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3478385 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3478385' 00:23:53.135 killing process with pid 3478385 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3478385 00:23:53.135 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3478385 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.394 13:03:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:55.923 13:03:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.858 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:56.858 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:56.858 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:56.858 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:56.859 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:56.859 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:56.859 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:56.859 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:56.859 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:57.797 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:23:58.056 13:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.iSB /tmp/spdk.key-null.Awj /tmp/spdk.key-sha256.0NA /tmp/spdk.key-sha384.N1H /tmp/spdk.key-sha512.we5 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:58.056 13:03:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:58.988 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:58.988 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:58.988 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:59.247 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:59.247 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:59.247 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:59.247 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:59.247 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:59.247 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:59.247 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:59.247 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:59.247 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:59.247 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:59.247 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:59.247 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:59.247 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:59.247 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:59.247 00:23:59.247 real 0m50.326s 00:23:59.247 user 0m47.734s 00:23:59.247 sys 0m6.019s 00:23:59.247 13:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:59.247 13:03:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.247 ************************************ 00:23:59.247 END TEST nvmf_auth_host 00:23:59.247 ************************************ 00:23:59.247 13:03:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:59.247 13:03:17 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:23:59.247 13:03:17 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.247 13:03:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:59.247 13:03:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.247 13:03:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.247 ************************************ 00:23:59.247 START TEST nvmf_digest 00:23:59.247 ************************************ 00:23:59.247 13:03:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:59.506 * Looking for test storage... 
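The nvmf_auth_host run that ends above exercises NVMe/TCP in-band DH-HMAC-CHAP authentication from the SPDK initiator side. A minimal sketch of the per-key sequence it repeats, built only from the RPCs visible in this trace (rpc_cmd in the log is a thin wrapper over scripts/rpc.py; the address, NQNs and key ids are the ones this run uses and would differ elsewhere):

  # Constrain the initiator to one digest/DH group, then attach with a DH-HMAC-CHAP key.
  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  rpc.py bdev_nvme_detach_controller nvme0

The negative cases earlier in the run (attach with no key, with the wrong key, or with a mismatched controller key) are expected to fail, which is why the JSON-RPC error -5 (Input/output error) responses above are wrapped in the NOT helper and counted as passes rather than failures.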
00:23:59.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.506 13:03:17 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.507 13:03:17 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.507 13:03:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.404 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:01.405 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:01.405 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:01.405 Found net devices under 0000:84:00.0: cvl_0_0 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:01.405 Found net devices under 0000:84:00.1: cvl_0_1 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.405 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:01.664 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.664 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:24:01.664 00:24:01.664 --- 10.0.0.2 ping statistics --- 00:24:01.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.664 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.664 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.664 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:24:01.664 00:24:01.664 --- 10.0.0.1 ping statistics --- 00:24:01.664 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.664 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.664 ************************************ 00:24:01.664 START TEST nvmf_digest_clean 00:24:01.664 ************************************ 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3488613 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3488613 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3488613 ']' 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.664 
13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.664 13:03:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.664 [2024-07-15 13:03:19.781326] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:01.664 [2024-07-15 13:03:19.781423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.664 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.664 [2024-07-15 13:03:19.854373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.922 [2024-07-15 13:03:19.962658] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.922 [2024-07-15 13:03:19.962719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.922 [2024-07-15 13:03:19.962733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.922 [2024-07-15 13:03:19.962766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.922 [2024-07-15 13:03:19.962778] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
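The nvmftestinit phase traced above turns the two e810 ports (cvl_0_0/cvl_0_1) into a point-to-point 10.0.0.0/24 test link by moving the target-side port into its own network namespace. Condensed to the bare commands the log shows, with the interface and namespace names this host uses:

  ip netns add cvl_0_0_ns_spdk                        # target side lives in the namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then started under ip netns exec cvl_0_0_ns_spdk with --wait-for-rpc, so the NVMe/TCP listener on 10.0.0.2 port 4420 sits inside the namespace while the bdevperf initiator connects from the host side.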
00:24:01.922 [2024-07-15 13:03:19.962805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.856 null0 00:24:02.856 [2024-07-15 13:03:20.855211] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.856 [2024-07-15 13:03:20.879393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3488768 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3488768 /var/tmp/bperf.sock 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3488768 ']' 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:02.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.856 13:03:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.857 [2024-07-15 13:03:20.924872] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:02.857 [2024-07-15 13:03:20.924941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3488768 ] 00:24:02.857 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.857 [2024-07-15 13:03:20.984213] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.114 [2024-07-15 13:03:21.090515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:03.114 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:03.114 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:03.114 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:03.114 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:03.114 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:03.371 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.371 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.629 nvme0n1 00:24:03.629 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:03.629 13:03:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:03.888 Running I/O for 2 seconds... 
00:24:05.793 00:24:05.793 Latency(us) 00:24:05.793 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:05.793 nvme0n1 : 2.00 20905.46 81.66 0.00 0.00 6115.57 3106.89 13689.74 00:24:05.793 =================================================================================================================== 00:24:05.793 Total : 20905.46 81.66 0.00 0.00 6115.57 3106.89 13689.74 00:24:05.793 0 00:24:05.793 13:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:05.793 13:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:05.793 13:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:05.793 13:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:05.793 | select(.opcode=="crc32c") 00:24:05.793 | "\(.module_name) \(.executed)"' 00:24:05.793 13:03:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3488768 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3488768 ']' 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3488768 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3488768 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3488768' 00:24:06.052 killing process with pid 3488768 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3488768 00:24:06.052 Received shutdown signal, test time was about 2.000000 seconds 00:24:06.052 00:24:06.052 Latency(us) 00:24:06.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.052 =================================================================================================================== 00:24:06.052 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.052 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3488768 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:06.310 13:03:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3489180 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3489180 /var/tmp/bperf.sock 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3489180 ']' 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:06.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:06.310 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.569 [2024-07-15 13:03:24.535793] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:06.569 [2024-07-15 13:03:24.535886] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489180 ] 00:24:06.569 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.569 Zero copy mechanism will not be used. 
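With DSA disabled (scan_dsa=false) the digest work is expected to land in the software accel module, so after each two-second run the test only has to read the accel statistics over the bperf RPC socket and keep the crc32c entry, exactly as the get_accel_stats call above does. A standalone version of that check, assuming the same /var/tmp/bperf.sock socket:

  rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # prints "<module_name> <executed>"; the test passes when the module is software
  # and the executed count is greater than zero

The bperf process for the finished run is then killed before the next run_bperf starts with a different block size and queue depth.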
00:24:06.569 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.569 [2024-07-15 13:03:24.594161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.569 [2024-07-15 13:03:24.699050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.569 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.569 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:06.569 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:06.569 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:06.569 13:03:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:07.148 13:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.148 13:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.460 nvme0n1 00:24:07.460 13:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:07.460 13:03:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:07.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:07.744 Zero copy mechanism will not be used. 00:24:07.744 Running I/O for 2 seconds... 
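The sequence just traced is the core of every clean-digest pass: finish framework init on the paused bdevperf, attach the NVMe-oF/TCP controller with the data digest enabled, then drive the timed workload through bdevperf's perform_tests helper. Condensed into the underlying commands, using the same socket and target address as this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Complete initialization of the bdevperf app started with --wait-for-rpc.
  "$rpc" -s /var/tmp/bperf.sock framework_start_init
  # Attach the target subsystem over TCP with --ddgst, so every NVMe/TCP data PDU
  # carries a crc32c data digest that must be computed and verified.
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # The controller exposes nvme0n1; run the configured 2-second workload against it.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests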
00:24:09.646 00:24:09.646 Latency(us) 00:24:09.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.646 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:09.646 nvme0n1 : 2.00 4439.63 554.95 0.00 0.00 3600.25 737.28 5534.15 00:24:09.646 =================================================================================================================== 00:24:09.646 Total : 4439.63 554.95 0.00 0.00 3600.25 737.28 5534.15 00:24:09.646 0 00:24:09.646 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:09.646 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:09.646 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:09.646 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:09.646 | select(.opcode=="crc32c") 00:24:09.646 | "\(.module_name) \(.executed)"' 00:24:09.646 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:09.905 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:09.905 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3489180 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3489180 ']' 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3489180 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3489180 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3489180' 00:24:09.906 killing process with pid 3489180 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3489180 00:24:09.906 Received shutdown signal, test time was about 2.000000 seconds 00:24:09.906 00:24:09.906 Latency(us) 00:24:09.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.906 =================================================================================================================== 00:24:09.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.906 13:03:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3489180 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:10.171 13:03:28 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3489587 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3489587 /var/tmp/bperf.sock 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3489587 ']' 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:10.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.171 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.171 [2024-07-15 13:03:28.279683] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:24:10.171 [2024-07-15 13:03:28.279778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489587 ] 00:24:10.171 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.171 [2024-07-15 13:03:28.340135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.431 [2024-07-15 13:03:28.451217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.431 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.431 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:10.431 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:10.431 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:10.431 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.689 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.689 13:03:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:11.258 nvme0n1 00:24:11.258 13:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:11.258 13:03:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:11.258 Running I/O for 2 seconds... 
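After each 2-second run, the test reads the accel framework statistics back from the bdevperf instance and checks both that crc32c work was actually executed and that it ran on the module it expected (software here, since scan_dsa=false). A sketch of that check, reusing the jq filter from the trace; the error handling is simplified relative to digest.sh:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Fetch accel stats and keep only the crc32c row as "<module_name> <executed>".
  read -r acc_module acc_executed < <("$rpc" -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  exp_module=software
  # The digest path must have produced at least one crc32c, on the expected engine.
  (( acc_executed > 0 ))
  [[ $acc_module == "$exp_module" ]]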
00:24:13.791 00:24:13.791 Latency(us) 00:24:13.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.791 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:13.791 nvme0n1 : 2.01 23141.38 90.40 0.00 0.00 5519.75 2172.40 13495.56 00:24:13.791 =================================================================================================================== 00:24:13.791 Total : 23141.38 90.40 0.00 0.00 5519.75 2172.40 13495.56 00:24:13.791 0 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:13.791 | select(.opcode=="crc32c") 00:24:13.791 | "\(.module_name) \(.executed)"' 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3489587 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3489587 ']' 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3489587 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3489587 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3489587' 00:24:13.791 killing process with pid 3489587 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3489587 00:24:13.791 Received shutdown signal, test time was about 2.000000 seconds 00:24:13.791 00:24:13.791 Latency(us) 00:24:13.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.791 =================================================================================================================== 00:24:13.791 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3489587 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:13.791 13:03:31 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3490114 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3490114 /var/tmp/bperf.sock 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3490114 ']' 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.791 13:03:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:14.047 [2024-07-15 13:03:32.017416] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:14.047 [2024-07-15 13:03:32.017506] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490114 ] 00:24:14.047 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:14.047 Zero copy mechanism will not be used. 
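As a consistency check on the Latency(us) tables above, the MiB/s column is simply IOPS multiplied by the I/O size: 23141.38 IOPS x 4096 B ≈ 94.8 MB/s = 90.40 MiB/s for the 4 KiB randwrite run, and 4439.63 IOPS x 131072 B ≈ 581.9 MB/s = 554.95 MiB/s for the 128 KiB randread run, matching the reported 90.40 and 554.95.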
00:24:14.047 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.047 [2024-07-15 13:03:32.075674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.047 [2024-07-15 13:03:32.183451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.047 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.047 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:14.047 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:14.047 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:14.047 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:14.613 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.613 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.871 nvme0n1 00:24:14.871 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:14.872 13:03:32 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.872 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:14.872 Zero copy mechanism will not be used. 00:24:14.872 Running I/O for 2 seconds... 
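Each iteration ends by tearing down its bdevperf instance through the killprocess helper seen throughout this trace: verify the PID is still alive, record what it is, send the kill, and reap it so the next iteration can reuse /var/tmp/bperf.sock. A compact sketch of that pattern; the pid variable is illustrative:

  pid=$bperfpid
  # Make sure there is something to kill and that the process still exists.
  [ -n "$pid" ] && kill -0 "$pid"
  # reactor_1 for the bperf instances above, reactor_0 for the nvmf target.
  process_name=$(ps --no-headers -o comm= "$pid")
  echo "killing process with pid $pid ($process_name)"
  kill "$pid"
  # Reap the child so its exit status is collected before the next run starts.
  wait "$pid"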
00:24:16.778 00:24:16.778 Latency(us) 00:24:16.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.778 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:16.778 nvme0n1 : 2.00 4803.10 600.39 0.00 0.00 3324.11 1699.08 5388.52 00:24:16.778 =================================================================================================================== 00:24:16.778 Total : 4803.10 600.39 0.00 0.00 3324.11 1699.08 5388.52 00:24:16.778 0 00:24:17.035 13:03:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:17.035 13:03:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:17.035 13:03:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:17.036 13:03:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:17.036 | select(.opcode=="crc32c") 00:24:17.036 | "\(.module_name) \(.executed)"' 00:24:17.036 13:03:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3490114 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3490114 ']' 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3490114 00:24:17.036 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3490114 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3490114' 00:24:17.295 killing process with pid 3490114 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3490114 00:24:17.295 Received shutdown signal, test time was about 2.000000 seconds 00:24:17.295 00:24:17.295 Latency(us) 00:24:17.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.295 =================================================================================================================== 00:24:17.295 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.295 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3490114 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3488613 00:24:17.555 13:03:35 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3488613 ']' 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3488613 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3488613 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3488613' 00:24:17.555 killing process with pid 3488613 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3488613 00:24:17.555 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3488613 00:24:17.814 00:24:17.814 real 0m16.116s 00:24:17.814 user 0m30.496s 00:24:17.814 sys 0m5.201s 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:17.814 ************************************ 00:24:17.814 END TEST nvmf_digest_clean 00:24:17.814 ************************************ 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.814 ************************************ 00:24:17.814 START TEST nvmf_digest_error 00:24:17.814 ************************************ 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3490549 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3490549 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3490549 ']' 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:17.814 13:03:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.814 [2024-07-15 13:03:35.942041] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:17.814 [2024-07-15 13:03:35.942154] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.814 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.814 [2024-07-15 13:03:36.005705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.074 [2024-07-15 13:03:36.113733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.074 [2024-07-15 13:03:36.113808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.074 [2024-07-15 13:03:36.113833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:18.074 [2024-07-15 13:03:36.113845] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:18.074 [2024-07-15 13:03:36.113855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
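What distinguishes the nvmf_digest_error test being started here from the clean passes above is the accel error module: crc32c is reassigned to it on the target, which is started with --wait-for-rpc precisely so this can happen before the accel framework initializes, and corruption is then injected per run with accel_error_inject_error, so the crc32c results backing the data digests stop matching the data and the "data digest error" and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions seen further down appear. A sketch of those RPCs as they occur in this trace; in the harness, rpc_cmd resolves to rpc.py against the target's default /var/tmp/spdk.sock inside the test netns, so the bare rpc.py calls below are a simplification:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # On the target, route all crc32c operations through the error-injection module.
  "$rpc" accel_assign_opc -o crc32c -m error
  # Start each pass from a clean state: no error injection for crc32c.
  "$rpc" accel_error_inject_error -o crc32c -t disable
  # ... attach the --ddgst controller from the bperf side as before, then corrupt
  # the next 256 crc32c operations so the digest checks fail during perform_tests.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256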
00:24:18.074 [2024-07-15 13:03:36.113881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.074 [2024-07-15 13:03:36.174402] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.074 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.334 null0 00:24:18.334 [2024-07-15 13:03:36.288810] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.334 [2024-07-15 13:03:36.313040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3490575 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3490575 /var/tmp/bperf.sock 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3490575 ']' 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.334 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.334 [2024-07-15 13:03:36.357549] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:18.334 [2024-07-15 13:03:36.357631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490575 ] 00:24:18.334 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.334 [2024-07-15 13:03:36.416441] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.334 [2024-07-15 13:03:36.522654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.592 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:18.592 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:18.592 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.592 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.850 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:18.850 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.850 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.850 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.850 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.850 13:03:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:19.420 nvme0n1 00:24:19.420 13:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:19.420 13:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.420 13:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:19.420 13:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.420 13:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:19.420 13:03:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:19.420 Running I/O for 2 seconds... 00:24:19.420 [2024-07-15 13:03:37.579576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.420 [2024-07-15 13:03:37.579628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.420 [2024-07-15 13:03:37.579648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.420 [2024-07-15 13:03:37.591612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.420 [2024-07-15 13:03:37.591644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.420 [2024-07-15 13:03:37.591662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.420 [2024-07-15 13:03:37.604996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.420 [2024-07-15 13:03:37.605026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.420 [2024-07-15 13:03:37.605058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.420 [2024-07-15 13:03:37.620471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.420 [2024-07-15 13:03:37.620500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.420 [2024-07-15 13:03:37.620517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.632077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.632108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.632125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.646642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.646671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.646689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.661197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.661226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19661 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.661243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.670916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.670958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.670976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.683235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.683263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.683280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.697971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.698001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.698018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.711706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.711760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.711778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.721893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.721923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.721940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.737436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.737465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.737482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.751333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.751362] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.751379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.762948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.762979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.762997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.776977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.777006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.777040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.789378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.789407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.789424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.799429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.799459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.799476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.812126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.812155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.812172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.824869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.824899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.824917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.681 [2024-07-15 13:03:37.835520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.681 [2024-07-15 13:03:37.835549] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.681 [2024-07-15 13:03:37.835566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.682 [2024-07-15 13:03:37.848617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.682 [2024-07-15 13:03:37.848647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.682 [2024-07-15 13:03:37.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.682 [2024-07-15 13:03:37.862352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.682 [2024-07-15 13:03:37.862381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.682 [2024-07-15 13:03:37.862400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.682 [2024-07-15 13:03:37.874018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.682 [2024-07-15 13:03:37.874050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.682 [2024-07-15 13:03:37.874068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.888782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.888814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.888837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.899801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.899831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.899848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.911687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.911732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:14293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.911760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.924195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.924224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.924241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.935203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.935232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.935249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.947436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.947466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.947483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.960520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.960550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.960567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.972709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.972745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.972779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.982926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.982956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.982974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:37.998707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:37.998758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:37.998777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:38.014294] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:38.014324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:38.014340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.943 [2024-07-15 13:03:38.027092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.943 [2024-07-15 13:03:38.027136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.943 [2024-07-15 13:03:38.027153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.037485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.037515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.037531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.050605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.050633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.050650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.064292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.064320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.064338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.077606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.077635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.077652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.088120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.088150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.088166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:19.944 [2024-07-15 13:03:38.104133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.104162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.104185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.114613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.114642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.114659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.129606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.129634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.129651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.944 [2024-07-15 13:03:38.141940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:19.944 [2024-07-15 13:03:38.141970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.944 [2024-07-15 13:03:38.141987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.152394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.152424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.152442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.164341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.164369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.164385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.175154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.175181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.175198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.188078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.188106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.188122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.198232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.198259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.198276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.210001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.210048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.210066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.221319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.221348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.221365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.233083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.233112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.233129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.245946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.245975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.245992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.256867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.256895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.256912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.269161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.269189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.269205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.280450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.280477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.280494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.291557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.291584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.291601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.302783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.302811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.302829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.313481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.313509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.313526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.328659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.328688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.328705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.338683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.338711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.205 [2024-07-15 13:03:38.338750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.351528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.351556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.351573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.366390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.366418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.366435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.380151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.380181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.380197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.390506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.390533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.390549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.205 [2024-07-15 13:03:38.405801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.205 [2024-07-15 13:03:38.405830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.205 [2024-07-15 13:03:38.405848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.418804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.418834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.418857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.430222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.430252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 
nsid:1 lba:10509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.430270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.440363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.440393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.440409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.452014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.452043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.452074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.463453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.463481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.463497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.474529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.474557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.474574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.487220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.487248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.487264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.497265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.497293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.497310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.511370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.511399] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.511415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.525253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.525282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.525298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.536109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.536137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.536154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.464 [2024-07-15 13:03:38.548715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.464 [2024-07-15 13:03:38.548762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.464 [2024-07-15 13:03:38.548781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.562210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.562238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.562254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.571831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.571860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.571878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.585462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.585490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.585507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.599494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 
00:24:20.465 [2024-07-15 13:03:38.599522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.599539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.609092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.609120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.609136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.622286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.622313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.622334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.635781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.635814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.635831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.646348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.646376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.646393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.465 [2024-07-15 13:03:38.658475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.465 [2024-07-15 13:03:38.658503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.465 [2024-07-15 13:03:38.658519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.672174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.672203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.672220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.683097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.683137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.683154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.693812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.693842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.693859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.704520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.704548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.704565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.717966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.718003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.718020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.732336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.732380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.732397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.745005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.745060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.745077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.754919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.754948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.754965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.766265] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.766293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.766309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.779354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.779382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.779398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.791207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.791235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.791251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.801474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.801502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.801519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.816103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.816132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.816148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.829168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.829196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.829213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.839613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.839641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.839657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.854809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.854837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.854855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.864898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.864926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.864943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.877928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.877956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.877973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.889686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.889713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.889730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.899890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.899920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.899937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.914108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.914136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:10811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.914153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.725 [2024-07-15 13:03:38.927061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.725 [2024-07-15 13:03:38.927091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.725 [2024-07-15 13:03:38.927124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:38.938399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:38.938429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:38.938450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:38.952457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:38.952486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:38.952503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:38.967313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:38.967341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:38.967358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:38.978034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:38.978077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:38.978093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:38.991193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:38.991222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:38.991239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.001446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.001473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.001489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.015568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.015596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.015612] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.030243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.030271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.030287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.042317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.042346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.042363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.052180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.052212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.052229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.064972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.065001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.065032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.074921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.074950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.074968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.087130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.087158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.087174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.101168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.101196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.985 [2024-07-15 13:03:39.101212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.110939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.110969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.110986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.125422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.125465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.125482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.135478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.135507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.135524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.149193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.149222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.985 [2024-07-15 13:03:39.149239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.985 [2024-07-15 13:03:39.164429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.985 [2024-07-15 13:03:39.164459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.986 [2024-07-15 13:03:39.164475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.986 [2024-07-15 13:03:39.176243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.986 [2024-07-15 13:03:39.176272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.986 [2024-07-15 13:03:39.176289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.986 [2024-07-15 13:03:39.189099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:20.986 [2024-07-15 13:03:39.189129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1007 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.986 [2024-07-15 13:03:39.189146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.243 [2024-07-15 13:03:39.199879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.199910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.199927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.211604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.211633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.211650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.222885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.222915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.222933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.235906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.235935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.235953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.246206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.246246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.246263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.258848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.258888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.258907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.270144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.270172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.270189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.281976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.282006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.282023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.294353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.294381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.294398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.309770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.309801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.309819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.320843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.320873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.320891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.332132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.332161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.332178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.344929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.344958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.344975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.356023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 
00:24:21.244 [2024-07-15 13:03:39.356052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.356069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.366820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.366850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.366867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.381298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.381327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.381344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.391762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.391790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.391807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.405313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.405344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.405361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.420693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.420749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.420770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.431992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.432022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.432040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.244 [2024-07-15 13:03:39.445684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.244 [2024-07-15 13:03:39.445713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.244 [2024-07-15 13:03:39.445730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.457103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.457133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.457150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.471218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.471247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.471269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.486551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.486582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.486599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.501818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.501848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.501866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.515901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.515930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.515947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.526215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.526244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.526261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.541798] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.541829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.541846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.551548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.551577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.551594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 [2024-07-15 13:03:39.565988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b34280) 00:24:21.502 [2024-07-15 13:03:39.566018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.502 [2024-07-15 13:03:39.566036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.502 00:24:21.502 Latency(us) 00:24:21.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.502 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:21.502 nvme0n1 : 2.01 20472.16 79.97 0.00 0.00 6244.58 3034.07 17864.63 00:24:21.502 =================================================================================================================== 00:24:21.502 Total : 20472.16 79.97 0.00 0.00 6244.58 3034.07 17864.63 00:24:21.502 0 00:24:21.502 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:21.502 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:21.502 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:21.502 | .driver_specific 00:24:21.502 | .nvme_error 00:24:21.502 | .status_code 00:24:21.502 | .command_transient_transport_error' 00:24:21.503 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3490575 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3490575 ']' 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3490575 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3490575 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
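The trace above shows how the test derives that error count: bperf_rpc sends bdev_get_iostat to bdevperf's RPC socket and jq extracts the transient transport error counter from the NVMe error statistics. A minimal standalone sketch of that helper, assuming the workspace path and socket seen in this run (the surrounding scaffolding is illustrative only), would be:
#!/usr/bin/env bash
# Sketch of the get_transient_errcount helper whose expansion is traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    local bdev=$1
    # bdev_get_iostat reports per-bdev NVMe error counters when the controller was
    # created with --nvme-error-stat; jq pulls out the transient transport error count.
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# The (( 161 > 0 )) check above is this assertion with the count already substituted in.
(( errcount > 0 ))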
common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3490575' 00:24:21.763 killing process with pid 3490575 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3490575 00:24:21.763 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.763 00:24:21.763 Latency(us) 00:24:21.763 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.763 =================================================================================================================== 00:24:21.763 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.763 13:03:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3490575 00:24:22.021 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:22.021 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:22.021 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3491101 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3491101 /var/tmp/bperf.sock 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3491101 ']' 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:22.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.022 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.022 [2024-07-15 13:03:40.194644] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:22.022 [2024-07-15 13:03:40.194735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491101 ] 00:24:22.022 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.022 Zero copy mechanism will not be used. 
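The trace above closes out the previous randread case and starts the next one: get_transient_errcount read 161 command_transient_transport_error completions off nvme0n1 through the bperf RPC socket, the (( 161 > 0 )) assertion passed, the old bdevperf (pid 3490575) was killed, and a fresh bdevperf was launched for the 128 KiB, queue-depth-16 variant (started with -z, so it idles on /var/tmp/bperf.sock until perform_tests is sent). As a minimal sketch, the count the script extracts amounts to this query, using the same socket path and bdev name as in this run:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The step passes when that number is non-zero, i.e. the injected digest corruptions were reported as retryable transient transport errors rather than hard I/O failures.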
00:24:22.022 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.279 [2024-07-15 13:03:40.254191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.279 [2024-07-15 13:03:40.358052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.279 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.279 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:22.279 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:22.279 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:22.537 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:22.537 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.537 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.537 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.537 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.537 13:03:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:23.104 nvme0n1 00:24:23.104 13:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:23.104 13:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.104 13:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:23.104 13:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.104 13:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:23.104 13:03:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:23.364 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:23.364 Zero copy mechanism will not be used. 00:24:23.364 Running I/O for 2 seconds... 
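This block arms the actual failure case: bdev_nvme is told to keep NVMe error statistics and retry indefinitely (--bdev-retry-count -1), any previous crc32c injection is cleared (accel_error_inject_error -o crc32c -t disable), the controller is attached with data digest enabled (--ddgst) over TCP to 10.0.0.2:4420, crc32c corruption is then armed (-t corrupt -i 32), and perform_tests kicks off the I/O. With the digest corrupted, the data digest check on received PDUs fails and each affected READ completes as a transient transport error that bdev_nvme keeps retrying. Restated as plain rpc.py calls it looks roughly like the sketch below; note the accel_error_inject_error calls go through rpc_cmd in the trace, which presumably targets the application's default RPC socket rather than /var/tmp/bperf.sock:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The nvme_tcp.c:1459 'data digest error' lines that follow are the expected effect of that injection, one per corrupted completion.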
00:24:23.364 [2024-07-15 13:03:41.359450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.364 [2024-07-15 13:03:41.359513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.364 [2024-07-15 13:03:41.359533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.364 [2024-07-15 13:03:41.369141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.364 [2024-07-15 13:03:41.369172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.364 [2024-07-15 13:03:41.369188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.364 [2024-07-15 13:03:41.378970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.364 [2024-07-15 13:03:41.379002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.364 [2024-07-15 13:03:41.379018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.364 [2024-07-15 13:03:41.388608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.364 [2024-07-15 13:03:41.388637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.364 [2024-07-15 13:03:41.388654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.364 [2024-07-15 13:03:41.398601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.364 [2024-07-15 13:03:41.398631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.364 [2024-07-15 13:03:41.398647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.364 [2024-07-15 13:03:41.408436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.364 [2024-07-15 13:03:41.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.364 [2024-07-15 13:03:41.408482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.364 [2024-07-15 13:03:41.418299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.418328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.418344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.428354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.428383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.428399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.438256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.438284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.438300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.447823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.447855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.447872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.457622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.457651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.457675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.466898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.466928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.466944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.474176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.474204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.474221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.480529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.480557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.480573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.486866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.486894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.486910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.493268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.493295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.493310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.499483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.499510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.499526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.505913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.505942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.505959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.512372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.512399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.512415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.518783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.518818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.518836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.526284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.526312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:23.365 [2024-07-15 13:03:41.526328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.535412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.535439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.535455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.544909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.544938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.544954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.554597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.554624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.554640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.365 [2024-07-15 13:03:41.564214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.365 [2024-07-15 13:03:41.564243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.365 [2024-07-15 13:03:41.564274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.626 [2024-07-15 13:03:41.574088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.626 [2024-07-15 13:03:41.574133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.626 [2024-07-15 13:03:41.574149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.626 [2024-07-15 13:03:41.583980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.626 [2024-07-15 13:03:41.584009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.626 [2024-07-15 13:03:41.584026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.626 [2024-07-15 13:03:41.593642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.626 [2024-07-15 13:03:41.593669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.593685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.603246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.603274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.603291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.612928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.612957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.612974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.622691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.622734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.622759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.632242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.632270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.632286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.641515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.641542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.641557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.650912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.650942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.650959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.660506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.660533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.660549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.670207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.670235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.670251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.679958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.679988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.680011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.687909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.687939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.687955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.694366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.694394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.694410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.700531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.700558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.700574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.706497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.706525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.706542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.712377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 
00:24:23.627 [2024-07-15 13:03:41.712404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.712421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.718297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.718325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.718340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.723490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.723518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.723535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.729662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.729691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.729707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.736782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.736814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.736832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.745874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.745906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.627 [2024-07-15 13:03:41.745923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.627 [2024-07-15 13:03:41.755617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.627 [2024-07-15 13:03:41.755647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.755663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.762789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.762819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.762837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.770388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.770416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.770432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.777123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.777153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.777170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.784854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.784885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.784902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.792417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.792447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.792463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.801273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.801303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.801326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.809027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.809073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.809091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.817111] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.817141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.817157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.628 [2024-07-15 13:03:41.825109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.628 [2024-07-15 13:03:41.825137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.628 [2024-07-15 13:03:41.825153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.833406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.833437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.833454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.842169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.842197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.842214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.850697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.850727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.850774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.859069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.859098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.859115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.867883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.867913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.867931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
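Each record in this stream pairs the host-side digest failure (nvme_tcp.c:1459, data digest error on the qpair) with the completion it produced: status (00/22) is the COMMAND TRANSIENT TRANSPORT ERROR the test is counting, and dnr:0 marks it as retryable. If only the console log is at hand, a rough equivalent of the counter the script later reads via bdev_get_iostat is simply the number of these records, e.g. (the log file name here is illustrative):

  grep -c 'data digest error on tqpair' nvmf-tcp-phy-autotest-console.log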
00:24:23.888 [2024-07-15 13:03:41.877414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.877453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.877470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.887172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.887202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.887219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.896190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.896220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.896236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.905404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.905433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.905449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.915273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.915302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.915319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.925529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.925558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.925575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.935059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.935088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.935104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.944584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.944613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.944629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.953824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.953854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.953871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.964115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.964146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.964162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.974836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.974868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.974885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.985034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.985065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.985081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.993566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.993607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.993624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:41.997559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:41.997587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:41.997603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.004745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.004774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.004807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.011773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.011803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.011820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.018247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.018276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.018293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.024941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.024973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.024996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.031673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.031703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.031719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.040978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.041008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.041025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.049931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.049961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.049978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.057921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.057951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.057969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.888 [2024-07-15 13:03:42.065560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.888 [2024-07-15 13:03:42.065588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.888 [2024-07-15 13:03:42.065604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.889 [2024-07-15 13:03:42.073663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.889 [2024-07-15 13:03:42.073691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.889 [2024-07-15 13:03:42.073712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.889 [2024-07-15 13:03:42.081848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.889 [2024-07-15 13:03:42.081879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.889 [2024-07-15 13:03:42.081897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.889 [2024-07-15 13:03:42.089955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:23.889 [2024-07-15 13:03:42.089986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.889 [2024-07-15 13:03:42.090003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.098897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.098933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.098950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.107536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.107565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 
[2024-07-15 13:03:42.107581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.115597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.115626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.115641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.123979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.124010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.124046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.132527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.132556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.132572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.141502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.141531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.141546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.150476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.150505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.150520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.159720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.159770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.159789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.169329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.169358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.169373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.177521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.177549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.149 [2024-07-15 13:03:42.177565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.149 [2024-07-15 13:03:42.186361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.149 [2024-07-15 13:03:42.186390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.186405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.195482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.195510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.195526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.204903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.204933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.204949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.212171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.212198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.212213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.220656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.220684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.220699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.229693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.229735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.229760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.238790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.238818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.238834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.248363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.248390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.248411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.258467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.258495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.258511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.268325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.268353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.268369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.278304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.278332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.278348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.288284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.288311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.288326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.298272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.298299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.298315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.308033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.308075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.308091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.317983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.318012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.318028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.327713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.327765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.327782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.337453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.337485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.337500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.150 [2024-07-15 13:03:42.347218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.150 [2024-07-15 13:03:42.347246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.150 [2024-07-15 13:03:42.347262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.357242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.357270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.357286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.367053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 
[2024-07-15 13:03:42.367083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.367113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.377621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.377648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.377664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.387847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.387876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.387892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.397585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.397612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.397628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.407240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.407268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.407284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.416918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.416947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.416963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.427473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.427501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.427517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.437248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.437276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.437291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.446971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.446999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.447028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.456661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.456688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.456703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.466701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.466729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.411 [2024-07-15 13:03:42.466765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.411 [2024-07-15 13:03:42.476660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.411 [2024-07-15 13:03:42.476687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.476702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.484383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.484410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.484425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.491013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.491055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.491071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.498185] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.498211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.498231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.506524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.506551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.506567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.515270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.515298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.515313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.524178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.524209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.524224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.532998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.533027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.533058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.540331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.540358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.540373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.548990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.549043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.549059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:24.412 [2024-07-15 13:03:42.557874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.557903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.557920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.566508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.566535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.566550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.574961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.574993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.575010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.584203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.584230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.584246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.593446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.593473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.593488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.602179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.602205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.602220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.412 [2024-07-15 13:03:42.611142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.412 [2024-07-15 13:03:42.611170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.412 [2024-07-15 13:03:42.611186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.671 [2024-07-15 13:03:42.620443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.671 [2024-07-15 13:03:42.620471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.671 [2024-07-15 13:03:42.620486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.671 [2024-07-15 13:03:42.629397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.671 [2024-07-15 13:03:42.629426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.671 [2024-07-15 13:03:42.629441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.671 [2024-07-15 13:03:42.639181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.671 [2024-07-15 13:03:42.639208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.639224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.648660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.648687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.648702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.658336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.658364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.658379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.668034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.668062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.668077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.677988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.678018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.678035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.687926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.687954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.687969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.697574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.697603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.697618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.707466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.707494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.707509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.717265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.717292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.717308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.727020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.727062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.727077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.736730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.736778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.736799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.746598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.746625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.746640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.756255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.756282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.756297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.766123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.766151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.766167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.775891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.775918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.775934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.785512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.785538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.785553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.795120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.795147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.795162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.805060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.805087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.805103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.814622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.814650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 
[2024-07-15 13:03:42.814665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.824245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.824273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.824288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.833415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.833443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.833459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.839929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.839957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.839973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.846166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.846192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.846208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.852238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.852265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.852280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.857938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.857965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.857980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.863955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.863983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.864000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.869928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.869954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.869970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.672 [2024-07-15 13:03:42.876441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.672 [2024-07-15 13:03:42.876468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.672 [2024-07-15 13:03:42.876489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.883402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.883431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.883447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.891978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.892008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.892026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.899244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.899271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.899286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.905212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.905239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.911105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.911131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.911146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.917843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.917871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.917886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.924305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.924332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.924347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.930264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.930291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.930306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.936261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.936291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.936307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.942193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.942220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.942235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.948258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.948285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.948300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.954508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.954535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.954551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.961602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.933 [2024-07-15 13:03:42.961629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.933 [2024-07-15 13:03:42.961645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.933 [2024-07-15 13:03:42.968646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:42.968673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:42.968688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:42.976061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:42.976089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:42.976104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:42.983132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:42.983158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:42.983173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:42.991071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:42.991099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:42.991115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:42.999710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:42.999758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:42.999775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.007964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 
[2024-07-15 13:03:43.007992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.008008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.017163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.017191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.017206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.026417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.026445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.026460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.035808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.035837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.035854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.044764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.044807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.044825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.052020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.052062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.052078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.059361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.059390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.059405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.066985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.067044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.067067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.073893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.073923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.073940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.080099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.080126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.080142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.087208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.087236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.087252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.094904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.094933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.094950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.102629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.102658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.102673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.111192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.111220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.111235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.119777] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.119805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.119821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.128873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.128901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.128918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:24.934 [2024-07-15 13:03:43.138219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:24.934 [2024-07-15 13:03:43.138256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:24.934 [2024-07-15 13:03:43.138274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.145951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.145981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.145998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.153365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.153393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.153409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.160124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.160169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.160185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.167047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.167087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.167103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:25.193 [2024-07-15 13:03:43.173594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.173623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.173639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.180344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.180373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.180389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.186979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.187023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.187040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.193348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.193375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.193391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.199574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.199602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.199618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.206166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.206195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.206212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.193 [2024-07-15 13:03:43.212545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.193 [2024-07-15 13:03:43.212572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.193 [2024-07-15 13:03:43.212588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.218860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.218889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.218906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.225293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.225321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.225336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.231482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.231510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.231526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.237834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.237861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.237877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.244267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.244295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.244311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.250484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.250511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.250533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.256768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.256796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.256813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.263355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.263383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.263399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.269903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.269932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.269949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.276475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.276502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.276518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.282829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.282857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.282873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.289112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.289140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.289155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.295545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.295572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.295588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.302016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.302046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.302079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.308731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.308774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.308791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.315811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.315839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.315855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.322474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.322501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.322518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.329299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.329326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.329342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.335976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.336005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.336022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.343082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.343109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 [2024-07-15 13:03:43.343125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.194 [2024-07-15 13:03:43.350335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x95ed60) 00:24:25.194 [2024-07-15 13:03:43.350364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.194 
[2024-07-15 13:03:43.350380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:25.194
00:24:25.194 Latency(us)
00:24:25.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.194 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:25.194 nvme0n1 : 2.00 3753.43 469.18 0.00 0.00 4258.29 825.27 11311.03
00:24:25.194 ===================================================================================================================
00:24:25.194 Total : 3753.43 469.18 0.00 0.00 4258.29 825.27 11311.03
00:24:25.194 0
00:24:25.194 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:25.194 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:25.194 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:25.194 | .driver_specific
00:24:25.194 | .nvme_error
00:24:25.194 | .status_code
00:24:25.194 | .command_transient_transport_error'
00:24:25.194 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 ))
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3491101
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3491101 ']'
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3491101
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491101
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491101'
00:24:25.453 killing process with pid 3491101
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3491101
00:24:25.453 Received shutdown signal, test time was about 2.000000 seconds
00:24:25.453
00:24:25.453 Latency(us)
00:24:25.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.453 ===================================================================================================================
00:24:25.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:25.453 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3491101
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:26.022 13:03:43
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3491513
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3491513 /var/tmp/bperf.sock
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3491513 ']'
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:26.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:26.022 13:03:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:26.022 [2024-07-15 13:03:43.963094] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization...
00:24:26.022 [2024-07-15 13:03:43.963184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491513 ]
00:24:26.022 EAL: No free 2048 kB hugepages reported on node 1
00:24:26.022 [2024-07-15 13:03:44.022498] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:26.022 [2024-07-15 13:03:44.129880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:26.281 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:26.281 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:26.281 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:26.281 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:26.542 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:26.542 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:26.542 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:26.542 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:26.542 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:26.542 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:26.800 nvme0n1
00:24:26.800 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:26.800 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:26.800 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:26.800 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:26.800 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:26.800 13:03:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:27.059 Running I/O for 2 seconds...
00:24:27.059 [2024-07-15 13:03:45.107666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ee5c8
00:24:27.059 [2024-07-15 13:03:45.108572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.059 [2024-07-15 13:03:45.108620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:24:27.059 [2024-07-15 13:03:45.118876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ef6a8
00:24:27.059 [2024-07-15 13:03:45.119729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.059 [2024-07-15 13:03:45.119778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:27.059 [2024-07-15 13:03:45.130122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb480
00:24:27.059 [2024-07-15 13:03:45.131136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.059 [2024-07-15 13:03:45.131167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:24:27.059 [2024-07-15 13:03:45.140494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eb760
00:24:27.059 [2024-07-15 13:03:45.141486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.059 [2024-07-15 13:03:45.141523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:27.059 [2024-07-15 13:03:45.152285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e1710
00:24:27.059 [2024-07-15 13:03:45.153432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:27.059 [2024-07-15 13:03:45.153463] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.162807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e7c50 00:24:27.059 [2024-07-15 13:03:45.163575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.163612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.174057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2d80 00:24:27.059 [2024-07-15 13:03:45.174630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.174656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.185514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e27f0 00:24:27.059 [2024-07-15 13:03:45.186256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.186290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.197206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0630 00:24:27.059 [2024-07-15 13:03:45.198099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.198125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.207616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f57b0 00:24:27.059 [2024-07-15 13:03:45.209219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.209247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.217407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f35f0 00:24:27.059 [2024-07-15 13:03:45.218214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.218243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.229527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fda78 00:24:27.059 [2024-07-15 13:03:45.230169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.230196] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.242299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e84c0 00:24:27.059 [2024-07-15 13:03:45.243677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.243713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.253569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f6cc8 00:24:27.059 [2024-07-15 13:03:45.255151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.059 [2024-07-15 13:03:45.255176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:27.059 [2024-07-15 13:03:45.265293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e1710 00:24:27.318 [2024-07-15 13:03:45.267175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.267202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.273207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e27f0 00:24:27.318 [2024-07-15 13:03:45.273979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.274015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.285476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f7100 00:24:27.318 [2024-07-15 13:03:45.286794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.286820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.296764] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ea248 00:24:27.318 [2024-07-15 13:03:45.298175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.298200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.308068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ef270 00:24:27.318 [2024-07-15 13:03:45.309637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 
[2024-07-15 13:03:45.309662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.318022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f9b30 00:24:27.318 [2024-07-15 13:03:45.319197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.319232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.327810] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ebfd0 00:24:27.318 [2024-07-15 13:03:45.329342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.329371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.337851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eee38 00:24:27.318 [2024-07-15 13:03:45.338615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.338650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.349099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ee190 00:24:27.318 [2024-07-15 13:03:45.349896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.349922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.361372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2d80 00:24:27.318 [2024-07-15 13:03:45.362831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.362865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.372842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2510 00:24:27.318 [2024-07-15 13:03:45.374257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.374282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.383400] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ed4e8 00:24:27.318 [2024-07-15 13:03:45.384429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22722 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:27.318 [2024-07-15 13:03:45.384465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.393601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f35f0 00:24:27.318 [2024-07-15 13:03:45.395308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.395334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.402916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eee38 00:24:27.318 [2024-07-15 13:03:45.403710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.403735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.414244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e5658 00:24:27.318 [2024-07-15 13:03:45.415319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:15782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.415346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.426103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fac10 00:24:27.318 [2024-07-15 13:03:45.427370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.427398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.437962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190de038 00:24:27.318 [2024-07-15 13:03:45.439275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.318 [2024-07-15 13:03:45.439305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.318 [2024-07-15 13:03:45.449257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0ea0 00:24:27.319 [2024-07-15 13:03:45.450574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.450600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.319 [2024-07-15 13:03:45.460517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2948 00:24:27.319 [2024-07-15 13:03:45.461985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10702 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.462011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.319 [2024-07-15 13:03:45.471790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0630 00:24:27.319 [2024-07-15 13:03:45.473542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.473574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.319 [2024-07-15 13:03:45.483387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ee190 00:24:27.319 [2024-07-15 13:03:45.485258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.485291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.319 [2024-07-15 13:03:45.491191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eb760 00:24:27.319 [2024-07-15 13:03:45.491979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.492006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.319 [2024-07-15 13:03:45.502795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fc998 00:24:27.319 [2024-07-15 13:03:45.503829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.503858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.319 [2024-07-15 13:03:45.514733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0ea0 00:24:27.319 [2024-07-15 13:03:45.515848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.319 [2024-07-15 13:03:45.515875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.526921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f5be8 00:24:27.577 [2024-07-15 13:03:45.528444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.528475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.537763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e8d30 00:24:27.577 [2024-07-15 13:03:45.539131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10548 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.539161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.549378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fda78 00:24:27.577 [2024-07-15 13:03:45.550777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.550803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.560386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e95a0 00:24:27.577 [2024-07-15 13:03:45.561807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.561834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.570914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190edd58 00:24:27.577 [2024-07-15 13:03:45.571949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.571981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.581155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eea00 00:24:27.577 [2024-07-15 13:03:45.582800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.582826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.590474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f5be8 00:24:27.577 [2024-07-15 13:03:45.591307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.591337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.601842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f7538 00:24:27.577 [2024-07-15 13:03:45.602779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.602808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.613867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ef6a8 00:24:27.577 [2024-07-15 13:03:45.615012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:6766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.615055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.625361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e5a90 00:24:27.577 [2024-07-15 13:03:45.626592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.626618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.635547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f96f8 00:24:27.577 [2024-07-15 13:03:45.636843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.636870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.646931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190dfdc0 00:24:27.577 [2024-07-15 13:03:45.648366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.648391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.658267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f31b8 00:24:27.577 [2024-07-15 13:03:45.659827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.659865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.669547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f6458 00:24:27.577 [2024-07-15 13:03:45.671342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.671369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.680902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fac10 00:24:27.577 [2024-07-15 13:03:45.682755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.682790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.577 [2024-07-15 13:03:45.688572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f1430 00:24:27.577 [2024-07-15 13:03:45.689454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:5068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.577 [2024-07-15 13:03:45.689480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.698830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fe2e8 00:24:27.578 [2024-07-15 13:03:45.699619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.699645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.710162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f5378 00:24:27.578 [2024-07-15 13:03:45.711183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.711208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.721446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e99d8 00:24:27.578 [2024-07-15 13:03:45.722630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.722656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.733474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ecc78 00:24:27.578 [2024-07-15 13:03:45.734821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.744666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e99d8 00:24:27.578 [2024-07-15 13:03:45.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.746151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.754899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fac10 00:24:27.578 [2024-07-15 13:03:45.756276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.756301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.763463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f92c0 00:24:27.578 [2024-07-15 13:03:45.764332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:102 nsid:1 lba:9360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.764358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.578 [2024-07-15 13:03:45.775445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e3498 00:24:27.578 [2024-07-15 13:03:45.776471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.578 [2024-07-15 13:03:45.776507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.787414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190de470 00:24:27.838 [2024-07-15 13:03:45.788565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.788596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.797510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e6300 00:24:27.838 [2024-07-15 13:03:45.798717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.798765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.808841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e3498 00:24:27.838 [2024-07-15 13:03:45.810142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.810168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.819921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e5658 00:24:27.838 [2024-07-15 13:03:45.821154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.821180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.830357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e23b8 00:24:27.838 [2024-07-15 13:03:45.831243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.831268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.841648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190df118 00:24:27.838 [2024-07-15 13:03:45.842649] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.842674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.852668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ed4e8 00:24:27.838 [2024-07-15 13:03:45.853984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.854016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.862784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eaab8 00:24:27.838 [2024-07-15 13:03:45.864426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.864452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.872298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e9168 00:24:27.838 [2024-07-15 13:03:45.873097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.873129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.883674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0a68 00:24:27.838 [2024-07-15 13:03:45.884617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.884643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.894714] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e7818 00:24:27.838 [2024-07-15 13:03:45.895662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.895701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.905873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ee5c8 00:24:27.838 [2024-07-15 13:03:45.906899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.906935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.916063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eea00 00:24:27.838 [2024-07-15 13:03:45.916920] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.916946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.927310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f81e0 00:24:27.838 [2024-07-15 13:03:45.928139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.928173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.938399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f20d8 00:24:27.838 [2024-07-15 13:03:45.939159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.939184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.950842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f9f68 00:24:27.838 [2024-07-15 13:03:45.952430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.952455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.962160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fef90 00:24:27.838 [2024-07-15 13:03:45.963775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.963826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.972397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190dece0 00:24:27.838 [2024-07-15 13:03:45.973603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.973639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.982270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2948 00:24:27.838 [2024-07-15 13:03:45.983854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.983879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:45.991570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f5be8 00:24:27.838 [2024-07-15 
13:03:45.992425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:45.992450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:46.003599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fa7d8 00:24:27.838 [2024-07-15 13:03:46.004616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:46.004641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:46.014935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ecc78 00:24:27.838 [2024-07-15 13:03:46.016024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:46.016071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:46.025170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e88f8 00:24:27.838 [2024-07-15 13:03:46.026321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:46.026346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:27.838 [2024-07-15 13:03:46.037284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f6cc8 00:24:27.838 [2024-07-15 13:03:46.038671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.838 [2024-07-15 13:03:46.038697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.049306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f4f40 00:24:28.099 [2024-07-15 13:03:46.050729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.050762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.059512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e6738 00:24:28.099 [2024-07-15 13:03:46.060764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.060791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.070788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f3a28 
00:24:28.099 [2024-07-15 13:03:46.072077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.072104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.081619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f6458 00:24:28.099 [2024-07-15 13:03:46.082917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.082945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.092513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb8b8 00:24:28.099 [2024-07-15 13:03:46.093773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.093800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.103342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eb760 00:24:28.099 [2024-07-15 13:03:46.104626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.104651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.114221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fd640 00:24:28.099 [2024-07-15 13:03:46.115440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.115465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.125431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f0350 00:24:28.099 [2024-07-15 13:03:46.126711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.126743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.137699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ef270 00:24:28.099 [2024-07-15 13:03:46.139544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.139569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.145429] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with 
pdu=0x2000190ec408 00:24:28.099 [2024-07-15 13:03:46.146286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.146311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.156431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eea00 00:24:28.099 [2024-07-15 13:03:46.157319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.157344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.167242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f92c0 00:24:28.099 [2024-07-15 13:03:46.168114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.168139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.177401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190efae0 00:24:28.099 [2024-07-15 13:03:46.178247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.178272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.188513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f0bc0 00:24:28.099 [2024-07-15 13:03:46.189395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.199761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eea00 00:24:28.099 [2024-07-15 13:03:46.200587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.200613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.210867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f92c0 00:24:28.099 [2024-07-15 13:03:46.211698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.211744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.223145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x87b0d0) with pdu=0x2000190e6fa8 00:24:28.099 [2024-07-15 13:03:46.224551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.224576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.233526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f9f68 00:24:28.099 [2024-07-15 13:03:46.234625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.234651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.243442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2948 00:24:28.099 [2024-07-15 13:03:46.244468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.244493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.254558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f46d0 00:24:28.099 [2024-07-15 13:03:46.255499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.255524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.265900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e4578 00:24:28.099 [2024-07-15 13:03:46.266987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.267031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.099 [2024-07-15 13:03:46.276580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f1430 00:24:28.099 [2024-07-15 13:03:46.277860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.099 [2024-07-15 13:03:46.277890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.100 [2024-07-15 13:03:46.287877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eb760 00:24:28.100 [2024-07-15 13:03:46.289262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-07-15 13:03:46.289288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:28.100 [2024-07-15 13:03:46.298275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x87b0d0) with pdu=0x2000190f3a28 00:24:28.100 [2024-07-15 13:03:46.299305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.100 [2024-07-15 13:03:46.299332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.311583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190de038 00:24:28.383 [2024-07-15 13:03:46.313241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.313268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.322432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190edd58 00:24:28.383 [2024-07-15 13:03:46.323634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.323661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.335183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb480 00:24:28.383 [2024-07-15 13:03:46.336881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.336907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.346769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f7100 00:24:28.383 [2024-07-15 13:03:46.348564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.348589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.354478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f81e0 00:24:28.383 [2024-07-15 13:03:46.355264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.355289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.366151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f92c0 00:24:28.383 [2024-07-15 13:03:46.367066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.367092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.378188] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0a68 00:24:28.383 [2024-07-15 13:03:46.379402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.379429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.388696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fe720 00:24:28.383 [2024-07-15 13:03:46.389704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.389751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:28.383 [2024-07-15 13:03:46.400203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e6b70 00:24:28.383 [2024-07-15 13:03:46.401190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.383 [2024-07-15 13:03:46.401215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.412461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e99d8 00:24:28.384 [2024-07-15 13:03:46.414050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.414086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.422612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f0788 00:24:28.384 [2024-07-15 13:03:46.423765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.423791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.433666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e95a0 00:24:28.384 [2024-07-15 13:03:46.434698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.434746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.444938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb048 00:24:28.384 [2024-07-15 13:03:46.446268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.446293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.456222] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190edd58 00:24:28.384 [2024-07-15 13:03:46.457619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.457645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.466464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f5be8 00:24:28.384 [2024-07-15 13:03:46.467863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.467900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.477576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fc128 00:24:28.384 [2024-07-15 13:03:46.478981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.479008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.488778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eee38 00:24:28.384 [2024-07-15 13:03:46.490216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.490242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.497649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f92c0 00:24:28.384 [2024-07-15 13:03:46.498520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.498546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.508903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ed0b0 00:24:28.384 [2024-07-15 13:03:46.509904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.509930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.519178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f4b08 00:24:28.384 [2024-07-15 13:03:46.520095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.520120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 
13:03:46.530736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f6cc8 00:24:28.384 [2024-07-15 13:03:46.531875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.531905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.542545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ef270 00:24:28.384 [2024-07-15 13:03:46.543835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.543863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.554754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ee5c8 00:24:28.384 [2024-07-15 13:03:46.556229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:6274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.556257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.384 [2024-07-15 13:03:46.566751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f4b08 00:24:28.384 [2024-07-15 13:03:46.568420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.384 [2024-07-15 13:03:46.568451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.579226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ddc00 00:24:28.649 [2024-07-15 13:03:46.580864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.580890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.590898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ec840 00:24:28.649 [2024-07-15 13:03:46.592818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.592846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.599480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f8618 00:24:28.649 [2024-07-15 13:03:46.600429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.600456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:24:28.649 [2024-07-15 13:03:46.611255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f8a50 00:24:28.649 [2024-07-15 13:03:46.612278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.612303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.623065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb480 00:24:28.649 [2024-07-15 13:03:46.624197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.624222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.635113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e8d30 00:24:28.649 [2024-07-15 13:03:46.636347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.636372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.646780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fdeb0 00:24:28.649 [2024-07-15 13:03:46.648165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.649 [2024-07-15 13:03:46.648199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:28.649 [2024-07-15 13:03:46.658478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f8a50 00:24:28.650 [2024-07-15 13:03:46.660062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.660087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.668654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fc998 00:24:28.650 [2024-07-15 13:03:46.669811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.669837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.679930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e3060 00:24:28.650 [2024-07-15 13:03:46.680848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.680873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.690981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e5a90 00:24:28.650 [2024-07-15 13:03:46.692313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.692337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.701919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f1430 00:24:28.650 [2024-07-15 13:03:46.703174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.703198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.712783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e6fa8 00:24:28.650 [2024-07-15 13:03:46.714027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.714066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.723590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fc128 00:24:28.650 [2024-07-15 13:03:46.724941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.724965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.734502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e1b48 00:24:28.650 [2024-07-15 13:03:46.735780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.735806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.747071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fdeb0 00:24:28.650 [2024-07-15 13:03:46.749023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.749049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.754889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e3060 00:24:28.650 [2024-07-15 13:03:46.755734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:12237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.755764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.765213] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ee5c8 00:24:28.650 [2024-07-15 13:03:46.766079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.766103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.777356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f0788 00:24:28.650 [2024-07-15 13:03:46.778379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.778403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.788248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb8b8 00:24:28.650 [2024-07-15 13:03:46.789300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.789325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.799567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f96f8 00:24:28.650 [2024-07-15 13:03:46.800720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.800751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.809744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f9b30 00:24:28.650 [2024-07-15 13:03:46.810695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.810718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.820980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e5658 00:24:28.650 [2024-07-15 13:03:46.821975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.822000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.833264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e88f8 00:24:28.650 [2024-07-15 13:03:46.834821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.834846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:28.650 [2024-07-15 13:03:46.843455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190fb8b8 00:24:28.650 [2024-07-15 13:03:46.844632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.650 [2024-07-15 13:03:46.844655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.855178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e9e10 00:24:28.910 [2024-07-15 13:03:46.856288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.856318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.868136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f2d80 00:24:28.910 [2024-07-15 13:03:46.869987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.870012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.875857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e84c0 00:24:28.910 [2024-07-15 13:03:46.876694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.876717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.887269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e3060 00:24:28.910 [2024-07-15 13:03:46.888215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.888239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.897528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e1b48 00:24:28.910 [2024-07-15 13:03:46.898379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.898402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.909640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e27f0 00:24:28.910 [2024-07-15 13:03:46.910677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.910701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.921937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f6890 00:24:28.910 [2024-07-15 13:03:46.923468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.923503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.933153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e0ea0 00:24:28.910 [2024-07-15 13:03:46.934716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.934769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.942315] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190dece0 00:24:28.910 [2024-07-15 13:03:46.943104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.943129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.954844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190f0ff8 00:24:28.910 [2024-07-15 13:03:46.956409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.956433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.965007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190dece0 00:24:28.910 [2024-07-15 13:03:46.966181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.966205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.976148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e8d30 00:24:28.910 [2024-07-15 13:03:46.977288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.977313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.987340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e23b8 00:24:28.910 [2024-07-15 13:03:46.988644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.988669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:46.998601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e38d0 00:24:28.910 [2024-07-15 13:03:46.999957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:46.999983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:47.008944] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e5658 00:24:28.910 [2024-07-15 13:03:47.010259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:47.010285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:47.020559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ef270 00:24:28.910 [2024-07-15 13:03:47.021841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:47.021876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:47.032161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eaab8 00:24:28.910 [2024-07-15 13:03:47.033591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:47.033616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:47.042747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190eaef0 00:24:28.910 [2024-07-15 13:03:47.044004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.910 [2024-07-15 13:03:47.044029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:28.910 [2024-07-15 13:03:47.053670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ea248 00:24:28.911 [2024-07-15 13:03:47.054970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.911 [2024-07-15 13:03:47.054996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:28.911 [2024-07-15 13:03:47.065486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ed4e8 00:24:28.911 [2024-07-15 13:03:47.066907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.911 [2024-07-15 13:03:47.066933] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:28.911 [2024-07-15 13:03:47.075906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e7c50 00:24:28.911 [2024-07-15 13:03:47.076880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.911 [2024-07-15 13:03:47.076905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:28.911 [2024-07-15 13:03:47.087353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190ddc00 00:24:28.911 [2024-07-15 13:03:47.088217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.911 [2024-07-15 13:03:47.088242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.911 [2024-07-15 13:03:47.100350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x87b0d0) with pdu=0x2000190e01f8 00:24:28.911 [2024-07-15 13:03:47.102054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.911 [2024-07-15 13:03:47.102079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:28.911 00:24:28.911 Latency(us) 00:24:28.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.911 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:28.911 nvme0n1 : 2.00 23209.68 90.66 0.00 0.00 5509.26 2172.40 13592.65 00:24:28.911 =================================================================================================================== 00:24:28.911 Total : 23209.68 90.66 0.00 0.00 5509.26 2172.40 13592.65 00:24:28.911 0 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:29.169 | .driver_specific 00:24:29.169 | .nvme_error 00:24:29.169 | .status_code 00:24:29.169 | .command_transient_transport_error' 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 )) 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3491513 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3491513 ']' 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3491513 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:29.169 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491513 00:24:29.428 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:29.428 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:29.428 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491513' 00:24:29.428 killing process with pid 3491513 00:24:29.428 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3491513 00:24:29.428 Received shutdown signal, test time was about 2.000000 seconds 00:24:29.428 00:24:29.428 Latency(us) 00:24:29.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.428 =================================================================================================================== 00:24:29.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.428 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3491513 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3491931 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3491931 /var/tmp/bperf.sock 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3491931 ']' 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.687 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:29.687 [2024-07-15 13:03:47.704696] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:29.687 [2024-07-15 13:03:47.704806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491931 ] 00:24:29.687 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:29.687 Zero copy mechanism will not be used. 
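The get_transient_errcount step that closed the 4096-byte run above asks the running bdevperf instance for per-bdev I/O statistics and extracts the transient transport error counter from the NVMe driver-specific section of the reply. A minimal sketch of that query, reusing the bperf RPC socket and bdev name shown in the trace (/var/tmp/bperf.sock, nvme0n1):

    # Fetch I/O statistics for nvme0n1 and pull out the number of completions that
    # ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

    # digest.sh treats the run as a pass when this prints a value greater than zero
    # (182 in the 4096-byte run above).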
00:24:29.687 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.687 [2024-07-15 13:03:47.769244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.687 [2024-07-15 13:03:47.880971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.944 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.944 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:29.944 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:29.944 13:03:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:30.202 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:30.202 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.202 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.202 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.202 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.202 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.769 nvme0n1 00:24:30.769 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:30.769 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.769 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.769 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.769 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:30.769 13:03:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:30.769 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:30.769 Zero copy mechanism will not be used. 00:24:30.769 Running I/O for 2 seconds... 
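The set-up traced above for the 131072-byte run is what produces the digest failures that follow: NVMe error statistics and unlimited retries are enabled on the bdevperf side, the controller is attached over TCP with data digest (--ddgst) turned on, crc32c corruption is injected into the accel layer with an interval of 32, and the workload is started through the bdevperf RPC helper. A condensed sketch of those calls, with addresses and names copied from the trace; in this run the injection goes through rpc_cmd rather than the bperf socket, so which application actually receives it depends on how rpc_cmd is defined in autotest_common.sh and is an assumption here:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf (host side) RPC socket
    TGT="$SPDK/scripts/rpc.py"                            # default socket, assumed to reach the nvmf target app

    # Keep per-controller NVMe error counters and retry failed I/O indefinitely.
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target subsystem over TCP with data digest enabled, so data-carrying
    # PDUs carry a CRC32C that is verified on receipt.
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
           -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Inject crc32c corruption in the accel layer (interval 32) to force digest errors.
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 32

    # Drive the 2-second random-write workload configured at bdevperf start-up.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests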
00:24:30.769 [2024-07-15 13:03:48.968501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:30.769 [2024-07-15 13:03:48.968821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.769 [2024-07-15 13:03:48.968858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.029 [2024-07-15 13:03:48.975958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.029 [2024-07-15 13:03:48.976263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:48.976306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:48.984938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:48.985216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:48.985244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:48.992509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:48.992838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:48.992865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:48.999590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:48.999891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:48.999926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.006200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.006518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.006544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.013214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.013511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.013538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.019707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.020058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.020085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.025399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.025668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.025694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.030951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.031253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.031281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.036602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.036958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.036994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.042638] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.042938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.042967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.048545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.048843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.048870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.054960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.055260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.055286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.062661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.063061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.063088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.069689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.070043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.070071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.076287] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.076554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.076580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.082363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.082631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.082657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.088613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.088907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.088935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.094948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.095246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.095272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.100921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.101205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.101231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.107081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.107359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.107386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.112774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.113037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.113078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.118990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.119275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.119301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.126248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.126500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.126525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.134775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.135168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.135194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.143625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.143998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 [2024-07-15 13:03:49.144038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.152550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.030 [2024-07-15 13:03:49.152915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.030 
[2024-07-15 13:03:49.152943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.030 [2024-07-15 13:03:49.162054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.162380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.162412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.171277] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.171635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.171661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.180892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.181255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.181288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.190531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.190867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.190897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.199633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.199992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.200020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.209598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.209912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.209939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.219545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.219925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.219953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.031 [2024-07-15 13:03:49.229186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.031 [2024-07-15 13:03:49.229508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.031 [2024-07-15 13:03:49.229537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.290 [2024-07-15 13:03:49.239053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.290 [2024-07-15 13:03:49.239415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.290 [2024-07-15 13:03:49.239443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.290 [2024-07-15 13:03:49.248266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.248637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.248664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.256540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.256823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.256851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.265163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.265479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.265505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.273196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.273498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.273524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.280975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.281336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.281362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.290242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.290500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.290527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.298427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.298708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.298756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.306425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.306708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.306734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.315536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.315937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.315965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.325290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.325609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.325636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.334941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.335290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.335316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.344672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.345024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.345051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.354790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.355159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.355186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.364352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.364712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.364743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.373986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.374323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.374350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.383974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.384302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.384330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.393608] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.393986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.394014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.403214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.403560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.403586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.412475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.412902] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.412930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.421250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.421509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.421543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.430126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.430478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.430505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.438855] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.439216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.439242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.448389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.448781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.448809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.458273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.458538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.458565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.467512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.467925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.467953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.477188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 
00:24:31.291 [2024-07-15 13:03:49.477556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.477583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.291 [2024-07-15 13:03:49.487908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.291 [2024-07-15 13:03:49.488232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.291 [2024-07-15 13:03:49.488259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.550 [2024-07-15 13:03:49.496980] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.550 [2024-07-15 13:03:49.497284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.550 [2024-07-15 13:03:49.497311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.550 [2024-07-15 13:03:49.505068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.550 [2024-07-15 13:03:49.505354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.550 [2024-07-15 13:03:49.505380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.550 [2024-07-15 13:03:49.512569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.550 [2024-07-15 13:03:49.512863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.550 [2024-07-15 13:03:49.512891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.550 [2024-07-15 13:03:49.520804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.550 [2024-07-15 13:03:49.521086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.550 [2024-07-15 13:03:49.521113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.550 [2024-07-15 13:03:49.528422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.550 [2024-07-15 13:03:49.528685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.528711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.536017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.536320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.536345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.543536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.543823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.543851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.551438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.551755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.551782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.560016] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.560295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.560322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.566260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.566516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.566542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.572093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.572373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.572398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.578216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.578489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.578515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.584333] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.584670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.584696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.590510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.590788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.590816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.596541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.596821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.596848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.602138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.602469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.602495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.608165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.608418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.608443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.614677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.615071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.615097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.622393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.622653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.622695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:24:31.551 [2024-07-15 13:03:49.628620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.628898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.628925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.634957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.635226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.635252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.641157] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.641409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.641436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.647267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.647517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.647543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.653672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.653965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.653993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.660018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.660310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.660335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.666436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.666686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.666711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.673000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.673281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.673308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.679117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.679370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.679396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.685284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.685533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.685558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.691329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.691579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.691604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.697404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.697689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.697714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.705105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.705428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.705453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.713951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.714319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.551 [2024-07-15 13:03:49.714345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.551 [2024-07-15 13:03:49.722309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.551 [2024-07-15 13:03:49.722670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.552 [2024-07-15 13:03:49.722696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.552 [2024-07-15 13:03:49.731063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.552 [2024-07-15 13:03:49.731397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.552 [2024-07-15 13:03:49.731423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.552 [2024-07-15 13:03:49.740852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.552 [2024-07-15 13:03:49.741127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.552 [2024-07-15 13:03:49.741161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.552 [2024-07-15 13:03:49.748817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.552 [2024-07-15 13:03:49.749091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.552 [2024-07-15 13:03:49.749118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.552 [2024-07-15 13:03:49.755469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.552 [2024-07-15 13:03:49.755753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.552 [2024-07-15 13:03:49.755792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.762074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.762319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.762345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.768303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.768545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.768571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.774928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.775189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.775215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.781295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.781537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.781563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.788267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.788506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.788533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.794696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.794981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.795009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.801151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.801403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.801428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.807599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.807882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.807909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.813765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.814024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 
[2024-07-15 13:03:49.814051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.820654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.820935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.820963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.828080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.828332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.828358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.834636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.834940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.834967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.841379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.841633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.841659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.848061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.848371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.848398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.854976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.855249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.855275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.861471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.861729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.861778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.867922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.868203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.868229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.874254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.874497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.874523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.880877] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.881241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.881277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.888584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.888974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.889002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.895847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.896179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.896206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.903834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.904190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.904218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.911494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.911899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.911926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.918752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.919061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.919108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.925829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.926108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.926135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.932263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.932505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.932531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.937856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.938132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.938158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.944114] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.944455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.944482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.949948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.950206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.950233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.955752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.956005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.956032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.961520] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.961834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.961862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.967471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.967712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.967746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.972991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.973268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.973292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.978568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.978838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.978865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.984158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.984399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.984425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.990115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.990364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.990390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:49.995670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:49.995962] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:49.995990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:50.001203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:50.001531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:50.001563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:50.006942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:50.007321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:50.007359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:31.811 [2024-07-15 13:03:50.013656] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:31.811 [2024-07-15 13:03:50.013964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.811 [2024-07-15 13:03:50.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.087 [2024-07-15 13:03:50.019658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.087 [2024-07-15 13:03:50.019957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.019988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.025472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.025758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.025786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.031339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.031613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.031666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.038394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 
00:24:32.088 [2024-07-15 13:03:50.038658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.038685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.045716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.045999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.046035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.052519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.052809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.052836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.059078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.059367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.059394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.065967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.066209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.066235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.072376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.072629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.072655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.078831] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.079097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.079130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.085645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.085940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.085967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.092561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.092831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.092860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.099650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.099940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.099967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.106798] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.107066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.107092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.113600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.113865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.113891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.120256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.120520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.120546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.126687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.126981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.127008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.133886] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.134152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.134177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.140922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.141186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.141211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.147795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.148047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.148072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.154803] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.155079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.155107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.162097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.162360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.162395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.169775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.170026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.170052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.088 [2024-07-15 13:03:50.176897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.088 [2024-07-15 13:03:50.177147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.088 [2024-07-15 13:03:50.177173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:24:32.088 [2024-07-15 13:03:50.184056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.184316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.184342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.190905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.191175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.191200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.198412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.198676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.198702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.205324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.205592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.205618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.211606] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.211883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.211909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.217696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.217985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.218012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.223876] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.224127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.224152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.229914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.230166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.230191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.236027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.236292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.236317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.242224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.242507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.242533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.248385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.248651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.248676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.255196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.255460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.255491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.261196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.261460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.261486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.267134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.267398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.267423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.273328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.273592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.273617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.279382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.279649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.279674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.285330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.285599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.285623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.089 [2024-07-15 13:03:50.291483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.089 [2024-07-15 13:03:50.291793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.089 [2024-07-15 13:03:50.291819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.297893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.298174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.298200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.304064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.304329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.304354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.310325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.310600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.310625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.316406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.316672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.316698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.322715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.323001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.323027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.328904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.329154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.329179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.335109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.335376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.335401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.340993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.341243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.341268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.347080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.347343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.347369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.353094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.353357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 
[2024-07-15 13:03:50.353382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.359057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.359317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.348 [2024-07-15 13:03:50.359342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.348 [2024-07-15 13:03:50.365182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.348 [2024-07-15 13:03:50.365443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.365468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.371380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.371643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.371668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.377351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.377614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.377639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.383318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.383581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.383606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.389300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.389564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.389589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.395439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.395703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.395750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.401266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.401527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.401552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.407184] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.407452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.407478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.413316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.413584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.413615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.419354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.419618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.419644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.425023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.425295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.425320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.430663] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.430948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.430975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.436439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.436701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.436726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.442317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.442584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.442609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.448172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.448435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.448460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.453963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.454216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.454241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.459780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.460029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.460055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.465301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.465566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.465592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.471197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.471458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.471484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.476694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.476990] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.477017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.482322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.482583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.482608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.487979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.488229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.488255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.493915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.494197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.494223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.499844] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.500097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.500122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.505607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.505929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.505956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.511514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.511798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.511825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.517351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 
13:03:50.517623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.517649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.523006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.523270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.523295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.528558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.528825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.528850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.534240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.534498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.534524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.539699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.539986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.540012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.545410] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.545671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.545696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.349 [2024-07-15 13:03:50.552141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.349 [2024-07-15 13:03:50.552471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.349 [2024-07-15 13:03:50.552512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.608 [2024-07-15 13:03:50.559374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with 
pdu=0x2000190fef90 00:24:32.608 [2024-07-15 13:03:50.559638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.608 [2024-07-15 13:03:50.559663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.608 [2024-07-15 13:03:50.566116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.608 [2024-07-15 13:03:50.566424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.608 [2024-07-15 13:03:50.566455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.608 [2024-07-15 13:03:50.573452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.608 [2024-07-15 13:03:50.573713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.608 [2024-07-15 13:03:50.573745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.608 [2024-07-15 13:03:50.580307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.608 [2024-07-15 13:03:50.580581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.608 [2024-07-15 13:03:50.580607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.608 [2024-07-15 13:03:50.588388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.608 [2024-07-15 13:03:50.588763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.608 [2024-07-15 13:03:50.588796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.608 [2024-07-15 13:03:50.595206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.595462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.595487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.600884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.601148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.601173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.606248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.606502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.606528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.611837] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.612080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.612105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.617510] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.617774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.617800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.623097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.623374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.623399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.629254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.629597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.629622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.635669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.635949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.635976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.641290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.641546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.641572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.647024] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.647283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.647308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.652765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.653008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.653033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.658515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.658793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.658820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.664245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.664499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.664524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.669919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.670183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.670209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.675577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.675840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.675865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.683091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.683412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.683438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
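The entries above and below are the expected signature of the digest error-injection pass: tcp.c:data_crc32_calc_done flags a CRC32C mismatch on the incoming data PDU, and the host qpair then prints the matching completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22). When triaging a saved copy of this console output offline, the failures can be tallied with standard tools; this is only a convenience sketch against a captured log file (the file name is illustrative, not something the test produces):

  # count transient transport error completions in a captured console log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log
  # list the most frequently hit LBAs among the printed WRITE commands
  grep -o 'lba:[0-9]*' console.log | sort | uniq -c | sort -rn | head
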
00:24:32.609 [2024-07-15 13:03:50.690331] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.690590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.690616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.696560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.696826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.696852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.702325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.702586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.702611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.707958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.708231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.713943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.714192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.714217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.721317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.721578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.721603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.727425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.727687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.727719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.734296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.734559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.734584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.741377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.741639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.741664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.748477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.748793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.748821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.754506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.754775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.754801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.760451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.760716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.760763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.766425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.766690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.609 [2024-07-15 13:03:50.766715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.609 [2024-07-15 13:03:50.772426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.609 [2024-07-15 13:03:50.772689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.772729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.610 [2024-07-15 13:03:50.778353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.610 [2024-07-15 13:03:50.778616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.778642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.610 [2024-07-15 13:03:50.784544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.610 [2024-07-15 13:03:50.784838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.784865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.610 [2024-07-15 13:03:50.790950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.610 [2024-07-15 13:03:50.791201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.791226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.610 [2024-07-15 13:03:50.797507] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.610 [2024-07-15 13:03:50.797777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.797803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.610 [2024-07-15 13:03:50.805080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.610 [2024-07-15 13:03:50.805427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.805452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.610 [2024-07-15 13:03:50.813141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.610 [2024-07-15 13:03:50.813563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.610 [2024-07-15 13:03:50.813589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.867 [2024-07-15 13:03:50.821327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.867 [2024-07-15 13:03:50.821706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.867 [2024-07-15 13:03:50.821732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.867 [2024-07-15 13:03:50.829376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.867 [2024-07-15 13:03:50.829717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.867 [2024-07-15 13:03:50.829750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.867 [2024-07-15 13:03:50.838041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.867 [2024-07-15 13:03:50.838424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.867 [2024-07-15 13:03:50.838458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.867 [2024-07-15 13:03:50.847251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.867 [2024-07-15 13:03:50.847579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.867 [2024-07-15 13:03:50.847614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.867 [2024-07-15 13:03:50.856043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.867 [2024-07-15 13:03:50.856399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.867 [2024-07-15 13:03:50.856438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.867 [2024-07-15 13:03:50.865292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.867 [2024-07-15 13:03:50.865597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.865623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.874371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.874735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.874766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.883362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.883694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 
[2024-07-15 13:03:50.883734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.892052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.892403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.892429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.899939] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.900199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.900225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.906425] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.906734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.906769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.913292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.913565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.913590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.920220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.920487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.920512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.926686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.926977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.927004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.933105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.933359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.933384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.939555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.939834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.939861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.945377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.945649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.945675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.951845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.952105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.952131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.958381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.958636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.958663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:32.868 [2024-07-15 13:03:50.964189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x9702e0) with pdu=0x2000190fef90 00:24:32.868 [2024-07-15 13:03:50.964335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:32.868 [2024-07-15 13:03:50.964362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:32.868 00:24:32.868 Latency(us) 00:24:32.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.868 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:32.868 nvme0n1 : 2.00 4475.11 559.39 0.00 0.00 3567.35 2548.62 10582.85 00:24:32.868 =================================================================================================================== 00:24:32.868 Total : 4475.11 559.39 0.00 0.00 3567.35 2548.62 10582.85 00:24:32.868 0 00:24:32.868 13:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:32.868 13:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:32.868 13:03:50 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:32.868 13:03:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:32.868 | .driver_specific 00:24:32.868 | .nvme_error 00:24:32.868 | .status_code 00:24:32.868 | .command_transient_transport_error' 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 289 > 0 )) 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3491931 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3491931 ']' 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3491931 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491931 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491931' 00:24:33.126 killing process with pid 3491931 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3491931 00:24:33.126 Received shutdown signal, test time was about 2.000000 seconds 00:24:33.126 00:24:33.126 Latency(us) 00:24:33.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.126 =================================================================================================================== 00:24:33.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.126 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3491931 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3490549 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3490549 ']' 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3490549 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3490549 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:33.383 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:33.384 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3490549' 00:24:33.384 killing process with pid 3490549 00:24:33.384 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3490549 00:24:33.384 13:03:51 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3490549 00:24:33.949 00:24:33.949 real 0m15.959s 00:24:33.949 user 0m31.014s 00:24:33.949 sys 0m5.151s 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.949 ************************************ 00:24:33.949 END TEST nvmf_digest_error 00:24:33.949 ************************************ 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:33.949 rmmod nvme_tcp 00:24:33.949 rmmod nvme_fabrics 00:24:33.949 rmmod nvme_keyring 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3490549 ']' 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3490549 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3490549 ']' 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3490549 00:24:33.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3490549) - No such process 00:24:33.949 13:03:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3490549 is not found' 00:24:33.950 Process with pid 3490549 is not found 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.950 13:03:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.848 13:03:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:35.848 00:24:35.848 real 0m36.536s 00:24:35.848 user 1m2.377s 00:24:35.848 sys 0m11.933s 00:24:35.848 13:03:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:35.848 13:03:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:35.848 
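The pass/fail decision for the error-injection run is the (( 289 > 0 )) check a few lines up: the host-side NVMe driver keeps a per-controller count of completions that carried the transient transport error status, and bdev_get_iostat exposes it under driver_specific. The same counter can be read by hand against the bperf RPC socket before the process is torn down; the rpc.py path below is relative to the SPDK tree used by this job, so adjust it for your checkout:

  # read the transient transport error counter for nvme0n1 (it returned 289 in this run)
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0]
             | .driver_specific
             | .nvme_error
             | .status_code
             | .command_transient_transport_error'
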
************************************ 00:24:35.848 END TEST nvmf_digest 00:24:35.848 ************************************ 00:24:35.848 13:03:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:35.848 13:03:54 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:24:35.848 13:03:54 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:24:35.848 13:03:54 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:24:35.848 13:03:54 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:35.848 13:03:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:35.848 13:03:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.848 13:03:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.848 ************************************ 00:24:35.848 START TEST nvmf_bdevperf 00:24:35.848 ************************************ 00:24:35.848 13:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:36.106 * Looking for test storage... 00:24:36.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.106 13:03:54 
nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- 
# MALLOC_BLOCK_SIZE=512 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:36.106 13:03:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.012 13:03:56 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:38.012 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:38.012 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:38.012 Found net devices under 0000:84:00.0: cvl_0_0 00:24:38.012 13:03:56 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:38.012 Found net devices under 0000:84:00.1: cvl_0_1 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.012 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:38.270 00:24:38.270 --- 10.0.0.2 ping statistics --- 00:24:38.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.270 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:38.270 00:24:38.270 --- 10.0.0.1 ping statistics --- 00:24:38.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.270 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3494411 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3494411 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3494411 ']' 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.270 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.270 [2024-07-15 13:03:56.369421] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
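The block above is the physical-NIC (NET_TYPE=phy) connectivity prep from nvmf/common.sh: the two ice ports appear as cvl_0_0 and cvl_0_1, the target-side port is moved into the cvl_0_0_ns_spdk namespace, 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator) are assigned, TCP port 4420 is opened in the firewall, both directions are ping-verified, and only then is nvmf_tgt started inside the namespace with core mask 0xE. A by-hand equivalent, run as root, would look roughly like the sketch below (the cvl_* interface names are specific to this host, and the nvmf_tgt path is relative to the SPDK build tree):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # start the target in the namespace with the same flags as this run
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
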
00:24:38.270 [2024-07-15 13:03:56.369505] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.270 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.270 [2024-07-15 13:03:56.431183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:38.526 [2024-07-15 13:03:56.533758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.526 [2024-07-15 13:03:56.533814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.526 [2024-07-15 13:03:56.533841] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.526 [2024-07-15 13:03:56.533853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.526 [2024-07-15 13:03:56.533862] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.526 [2024-07-15 13:03:56.533951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.526 [2024-07-15 13:03:56.534293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.526 [2024-07-15 13:03:56.534297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.526 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.527 [2024-07-15 13:03:56.675829] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.527 Malloc0 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.527 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:38.786 [2024-07-15 13:03:56.743014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:38.786 { 00:24:38.786 "params": { 00:24:38.786 "name": "Nvme$subsystem", 00:24:38.786 "trtype": "$TEST_TRANSPORT", 00:24:38.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:38.786 "adrfam": "ipv4", 00:24:38.786 "trsvcid": "$NVMF_PORT", 00:24:38.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:38.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:38.786 "hdgst": ${hdgst:-false}, 00:24:38.786 "ddgst": ${ddgst:-false} 00:24:38.786 }, 00:24:38.786 "method": "bdev_nvme_attach_controller" 00:24:38.786 } 00:24:38.786 EOF 00:24:38.786 )") 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:38.786 13:03:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:38.786 "params": { 00:24:38.786 "name": "Nvme1", 00:24:38.786 "trtype": "tcp", 00:24:38.786 "traddr": "10.0.0.2", 00:24:38.786 "adrfam": "ipv4", 00:24:38.786 "trsvcid": "4420", 00:24:38.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:38.786 "hdgst": false, 00:24:38.786 "ddgst": false 00:24:38.786 }, 00:24:38.786 "method": "bdev_nvme_attach_controller" 00:24:38.786 }' 00:24:38.786 [2024-07-15 13:03:56.791662] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:24:38.786 [2024-07-15 13:03:56.791771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494443 ] 00:24:38.786 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.786 [2024-07-15 13:03:56.855189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.786 [2024-07-15 13:03:56.968537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.352 Running I/O for 1 seconds... 00:24:40.289 00:24:40.289 Latency(us) 00:24:40.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.289 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:40.289 Verification LBA range: start 0x0 length 0x4000 00:24:40.289 Nvme1n1 : 1.01 8803.82 34.39 0.00 0.00 14483.64 1759.76 13301.38 00:24:40.289 =================================================================================================================== 00:24:40.289 Total : 8803.82 34.39 0.00 0.00 14483.64 1759.76 13301.38 00:24:40.549 13:03:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3494696 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:40.550 { 00:24:40.550 "params": { 00:24:40.550 "name": "Nvme$subsystem", 00:24:40.550 "trtype": "$TEST_TRANSPORT", 00:24:40.550 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:40.550 "adrfam": "ipv4", 00:24:40.550 "trsvcid": "$NVMF_PORT", 00:24:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:40.550 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:40.550 "hdgst": ${hdgst:-false}, 00:24:40.550 "ddgst": ${ddgst:-false} 00:24:40.550 }, 00:24:40.550 "method": "bdev_nvme_attach_controller" 00:24:40.550 } 00:24:40.550 EOF 00:24:40.550 )") 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:40.550 13:03:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:40.550 "params": { 00:24:40.550 "name": "Nvme1", 00:24:40.550 "trtype": "tcp", 00:24:40.550 "traddr": "10.0.0.2", 00:24:40.550 "adrfam": "ipv4", 00:24:40.550 "trsvcid": "4420", 00:24:40.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.550 "hdgst": false, 00:24:40.550 "ddgst": false 00:24:40.550 }, 00:24:40.550 "method": "bdev_nvme_attach_controller" 00:24:40.550 }' 00:24:40.550 [2024-07-15 13:03:58.608802] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:24:40.550 [2024-07-15 13:03:58.608882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494696 ] 00:24:40.550 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.550 [2024-07-15 13:03:58.668546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.807 [2024-07-15 13:03:58.778962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.065 Running I/O for 15 seconds... 00:24:43.600 13:04:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3494411 00:24:43.600 13:04:01 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:43.600 [2024-07-15 13:04:01.575459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.600 [2024-07-15 13:04:01.575504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.600 [2024-07-15 13:04:01.575549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.600 [2024-07-15 13:04:01.575564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.600 [2024-07-15 13:04:01.575579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.600 [2024-07-15 13:04:01.575592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.600 [2024-07-15 13:04:01.575607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.600 [2024-07-15 13:04:01.575619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.600 [2024-07-15 13:04:01.575633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.600 [2024-07-15 13:04:01.575646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.600 [2024-07-15 13:04:01.575660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.600 [2024-07-15 13:04:01.575682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.600 [2024-07-15 13:04:01.575705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.575973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.575989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 
[2024-07-15 13:04:01.576168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.601 [2024-07-15 13:04:01.576666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.601 [2024-07-15 13:04:01.576693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.601 [2024-07-15 13:04:01.576707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.576982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.576997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46080 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:46128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:46144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.602 [2024-07-15 13:04:01.577294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 
13:04:01.577320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.602 [2024-07-15 13:04:01.577628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.602 [2024-07-15 13:04:01.577642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.603 [2024-07-15 13:04:01.577899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.577967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.577989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.603 [2024-07-15 13:04:01.578360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 
[2024-07-15 13:04:01.578454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.603 [2024-07-15 13:04:01.578547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.603 [2024-07-15 13:04:01.578560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:45600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.578980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.578994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:45672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:45680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:45688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:45704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:45712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:43.604 [2024-07-15 13:04:01.579248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2e60 is same with the state(5) to be set 00:24:43.604 [2024-07-15 13:04:01.579275] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:43.604 [2024-07-15 13:04:01.579286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:43.604 [2024-07-15 13:04:01.579296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45744 len:8 PRP1 0x0 PRP2 0x0 00:24:43.604 [2024-07-15 13:04:01.579312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:43.604 [2024-07-15 13:04:01.579369] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: 
*NOTICE*: qpair 0xde2e60 was disconnected and freed. reset controller. 00:24:43.604 [2024-07-15 13:04:01.582477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.604 [2024-07-15 13:04:01.582543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.604 [2024-07-15 13:04:01.583213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.604 [2024-07-15 13:04:01.583239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.604 [2024-07-15 13:04:01.583253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.604 [2024-07-15 13:04:01.583444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.604 [2024-07-15 13:04:01.583640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.604 [2024-07-15 13:04:01.583657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.604 [2024-07-15 13:04:01.583672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.604 [2024-07-15 13:04:01.586911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.604 [2024-07-15 13:04:01.595915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.604 [2024-07-15 13:04:01.596372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.604 [2024-07-15 13:04:01.596397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.604 [2024-07-15 13:04:01.596425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.604 [2024-07-15 13:04:01.596613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.604 [2024-07-15 13:04:01.596840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.604 [2024-07-15 13:04:01.596861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.596874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.599798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
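The connect() failures above carry errno 111, which on Linux is ECONNREFUSED: the nvmf_tgt process (pid 3494411) that owned the 10.0.0.2:4420 listener was killed with kill -9 a few lines earlier, so every reconnect attempt from bdev_nvme is refused and each reset cycle ends in "Resetting controller failed." until a target is listening again. A minimal way to confirm that from a shell, assuming python3 and ss are available on the build host (these commands are illustrative and not part of the test script):

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# ECONNREFUSED - Connection refused
ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'   # empty listing: nothing serves port 4420 any more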
00:24:43.605 [2024-07-15 13:04:01.609072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.609523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.609561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.609576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.609802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.610000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.610018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.610031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.612919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.605 [2024-07-15 13:04:01.622271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.622686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.622710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.622745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.622959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.623169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.623187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.623199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.626011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.605 [2024-07-15 13:04:01.635328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.635770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.635808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.635823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.636011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.636203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.636221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.636233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.639125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.605 [2024-07-15 13:04:01.648422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.648858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.648898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.648913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.649119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.649311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.649329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.649341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.652235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.605 [2024-07-15 13:04:01.661495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.661834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.661858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.661873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.662061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.662253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.662270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.662282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.665217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.605 [2024-07-15 13:04:01.674597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.674973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.675012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.675030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.675232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.675424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.675442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.675454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.678344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.605 [2024-07-15 13:04:01.687733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.688093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.688117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.688145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.688347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.688539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.688557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.688569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.691461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.605 [2024-07-15 13:04:01.700940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.605 [2024-07-15 13:04:01.701392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.605 [2024-07-15 13:04:01.701415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.605 [2024-07-15 13:04:01.701443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.605 [2024-07-15 13:04:01.701632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.605 [2024-07-15 13:04:01.701853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.605 [2024-07-15 13:04:01.701873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.605 [2024-07-15 13:04:01.701885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.605 [2024-07-15 13:04:01.704711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.605 [2024-07-15 13:04:01.714114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.606 [2024-07-15 13:04:01.714540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.606 [2024-07-15 13:04:01.714564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.606 [2024-07-15 13:04:01.714593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.606 [2024-07-15 13:04:01.714814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.606 [2024-07-15 13:04:01.715038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.606 [2024-07-15 13:04:01.715077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.606 [2024-07-15 13:04:01.715091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.606 [2024-07-15 13:04:01.718230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.606 [2024-07-15 13:04:01.727356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.606 [2024-07-15 13:04:01.727758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.606 [2024-07-15 13:04:01.727799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.606 [2024-07-15 13:04:01.727815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.606 [2024-07-15 13:04:01.728049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.606 [2024-07-15 13:04:01.728241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.606 [2024-07-15 13:04:01.728259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.606 [2024-07-15 13:04:01.728271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.606 [2024-07-15 13:04:01.731225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.606 [2024-07-15 13:04:01.740673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.606 [2024-07-15 13:04:01.741118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.606 [2024-07-15 13:04:01.741156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.606 [2024-07-15 13:04:01.741170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.606 [2024-07-15 13:04:01.741371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.606 [2024-07-15 13:04:01.741563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.606 [2024-07-15 13:04:01.741580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.606 [2024-07-15 13:04:01.741592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.606 [2024-07-15 13:04:01.744568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.606 [2024-07-15 13:04:01.753785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.606 [2024-07-15 13:04:01.754220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.606 [2024-07-15 13:04:01.754258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.606 [2024-07-15 13:04:01.754272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.606 [2024-07-15 13:04:01.754461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.606 [2024-07-15 13:04:01.754651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.606 [2024-07-15 13:04:01.754669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.606 [2024-07-15 13:04:01.754681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.606 [2024-07-15 13:04:01.757573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.606 [2024-07-15 13:04:01.766926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.606 [2024-07-15 13:04:01.767321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.606 [2024-07-15 13:04:01.767359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.606 [2024-07-15 13:04:01.767372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.606 [2024-07-15 13:04:01.767574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.606 [2024-07-15 13:04:01.767793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.606 [2024-07-15 13:04:01.767812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.606 [2024-07-15 13:04:01.767825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.606 [2024-07-15 13:04:01.770650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.606 [2024-07-15 13:04:01.780145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.606 [2024-07-15 13:04:01.780585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.606 [2024-07-15 13:04:01.780623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.606 [2024-07-15 13:04:01.780638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.606 [2024-07-15 13:04:01.780854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.606 [2024-07-15 13:04:01.781067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.606 [2024-07-15 13:04:01.781085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.606 [2024-07-15 13:04:01.781097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.606 [2024-07-15 13:04:01.783967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.606 [2024-07-15 13:04:01.793135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.607 [2024-07-15 13:04:01.793502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.607 [2024-07-15 13:04:01.793540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.607 [2024-07-15 13:04:01.793553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.607 [2024-07-15 13:04:01.793781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.607 [2024-07-15 13:04:01.793979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.607 [2024-07-15 13:04:01.793997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.607 [2024-07-15 13:04:01.794009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.607 [2024-07-15 13:04:01.796931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.868 [2024-07-15 13:04:01.806337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.806774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.806812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.806827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.807028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.868 [2024-07-15 13:04:01.807220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.868 [2024-07-15 13:04:01.807238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.868 [2024-07-15 13:04:01.807249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.868 [2024-07-15 13:04:01.810144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.868 [2024-07-15 13:04:01.819480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.819900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.819924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.819952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.820141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.868 [2024-07-15 13:04:01.820332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.868 [2024-07-15 13:04:01.820350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.868 [2024-07-15 13:04:01.820362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.868 [2024-07-15 13:04:01.823249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.868 [2024-07-15 13:04:01.832522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.832947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.832986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.833000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.833209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.868 [2024-07-15 13:04:01.833434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.868 [2024-07-15 13:04:01.833453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.868 [2024-07-15 13:04:01.833465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.868 [2024-07-15 13:04:01.836834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.868 [2024-07-15 13:04:01.846259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.846731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.846777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.846791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.847006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.868 [2024-07-15 13:04:01.847230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.868 [2024-07-15 13:04:01.847248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.868 [2024-07-15 13:04:01.847264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.868 [2024-07-15 13:04:01.850233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.868 [2024-07-15 13:04:01.859495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.859905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.859930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.859958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.860183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.868 [2024-07-15 13:04:01.860376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.868 [2024-07-15 13:04:01.860394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.868 [2024-07-15 13:04:01.860406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.868 [2024-07-15 13:04:01.863350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.868 [2024-07-15 13:04:01.872497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.872931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.872969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.872984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.873172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.868 [2024-07-15 13:04:01.873363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.868 [2024-07-15 13:04:01.873381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.868 [2024-07-15 13:04:01.873393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.868 [2024-07-15 13:04:01.876285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.868 [2024-07-15 13:04:01.885584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.868 [2024-07-15 13:04:01.885986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.868 [2024-07-15 13:04:01.886010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.868 [2024-07-15 13:04:01.886024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.868 [2024-07-15 13:04:01.886212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.886403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.886421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.886433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.889284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.869 [2024-07-15 13:04:01.898819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.899243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.899285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.899301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.899489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.899681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.899699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.899710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.902593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.869 [2024-07-15 13:04:01.911919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.912339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.912363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.912391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.912580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.912799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.912819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.912832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.915736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.869 [2024-07-15 13:04:01.925004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.925447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.925484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.925498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.925686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.925908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.925928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.925940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.928828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.869 [2024-07-15 13:04:01.938156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.938533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.938571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.938585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.938814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.939018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.939037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.939064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.941940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.869 [2024-07-15 13:04:01.951265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.951728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.951793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.951807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.952015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.952223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.952241] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.952253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.955107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.869 [2024-07-15 13:04:01.964455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.964901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.964926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.964954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.965142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.965334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.965368] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.965380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.968299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.869 [2024-07-15 13:04:01.977449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.977906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.977944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.977959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.978147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.978338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.978356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.978368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.981266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.869 [2024-07-15 13:04:01.990591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:01.991078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:01.991102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:01.991115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:01.991318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:01.991509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:01.991527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:01.991539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:01.994395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.869 [2024-07-15 13:04:02.003670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:02.004171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:02.004208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:02.004223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:02.004410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:02.004601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:02.004619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:02.004632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:02.007527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.869 [2024-07-15 13:04:02.016864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:02.017337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:02.017375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:02.017389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.869 [2024-07-15 13:04:02.017578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.869 [2024-07-15 13:04:02.017797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.869 [2024-07-15 13:04:02.017817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.869 [2024-07-15 13:04:02.017830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.869 [2024-07-15 13:04:02.020619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.869 [2024-07-15 13:04:02.029979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.869 [2024-07-15 13:04:02.030407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.869 [2024-07-15 13:04:02.030430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.869 [2024-07-15 13:04:02.030465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.870 [2024-07-15 13:04:02.030655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.870 [2024-07-15 13:04:02.030875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.870 [2024-07-15 13:04:02.030895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.870 [2024-07-15 13:04:02.030908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.870 [2024-07-15 13:04:02.033793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.870 [2024-07-15 13:04:02.043114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.870 [2024-07-15 13:04:02.043570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.870 [2024-07-15 13:04:02.043609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.870 [2024-07-15 13:04:02.043623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.870 [2024-07-15 13:04:02.043840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.870 [2024-07-15 13:04:02.044039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.870 [2024-07-15 13:04:02.044058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.870 [2024-07-15 13:04:02.044070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.870 [2024-07-15 13:04:02.046897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.870 [2024-07-15 13:04:02.056219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.870 [2024-07-15 13:04:02.056681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.870 [2024-07-15 13:04:02.056729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.870 [2024-07-15 13:04:02.056753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.870 [2024-07-15 13:04:02.056962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.870 [2024-07-15 13:04:02.057171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.870 [2024-07-15 13:04:02.057189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.870 [2024-07-15 13:04:02.057202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.870 [2024-07-15 13:04:02.060093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:43.870 [2024-07-15 13:04:02.069523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.870 [2024-07-15 13:04:02.069978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.870 [2024-07-15 13:04:02.070027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:43.870 [2024-07-15 13:04:02.070042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:43.870 [2024-07-15 13:04:02.070256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:43.870 [2024-07-15 13:04:02.070474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.870 [2024-07-15 13:04:02.070497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.870 [2024-07-15 13:04:02.070510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.073568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.131 [2024-07-15 13:04:02.082687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.083167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.083216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.083230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.083432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.083624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.083657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.131 [2024-07-15 13:04:02.083669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.086889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.131 [2024-07-15 13:04:02.096299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.096775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.096815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.096830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.097043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.097252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.097270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.131 [2024-07-15 13:04:02.097282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.100261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.131 [2024-07-15 13:04:02.109557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.110044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.110068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.110096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.110285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.110476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.110494] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.131 [2024-07-15 13:04:02.110506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.113441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.131 [2024-07-15 13:04:02.122675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.123105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.123143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.123157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.123360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.123551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.123568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.131 [2024-07-15 13:04:02.123580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.126512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.131 [2024-07-15 13:04:02.135703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.136164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.136188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.136217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.136406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.136597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.136615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.131 [2024-07-15 13:04:02.136627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.139519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.131 [2024-07-15 13:04:02.148840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.149317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.149341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.149369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.149558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.149777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.149797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.131 [2024-07-15 13:04:02.149810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.131 [2024-07-15 13:04:02.152637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.131 [2024-07-15 13:04:02.161949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.131 [2024-07-15 13:04:02.162430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.131 [2024-07-15 13:04:02.162453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.131 [2024-07-15 13:04:02.162482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.131 [2024-07-15 13:04:02.162674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.131 [2024-07-15 13:04:02.162896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.131 [2024-07-15 13:04:02.162916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.162929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.165837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.175026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.175496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.175544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.175558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.175787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.175985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.176003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.176015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.178903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.132 [2024-07-15 13:04:02.188216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.188675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.188725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.188746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.188957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.189166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.189185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.189197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.192009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.201322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.201789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.201821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.201834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.202038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.202230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.202248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.202265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.205041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.132 [2024-07-15 13:04:02.214382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.214866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.214890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.214919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.215107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.215299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.215317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.215329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.218268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.227610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.228080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.228104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.228132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.228321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.228512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.228530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.228542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.231460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.132 [2024-07-15 13:04:02.240582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.241040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.241078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.241093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.241281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.241472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.241490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.241501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.244396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.253705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.254181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.254209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.254238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.254427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.254617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.254635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.254646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.257504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.132 [2024-07-15 13:04:02.266826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.267256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.267292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.267307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.267496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.267688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.267705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.267717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.270610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.279842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.280280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.280317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.280332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.280520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.280711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.280730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.280767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.283640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.132 [2024-07-15 13:04:02.292868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.293271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.293316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.293330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.293531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.293727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.293755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.293768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.296536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.305978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.306432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.306484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.306498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.306699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.306920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.306940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.306952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.309879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.132 [2024-07-15 13:04:02.319066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.319529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.319579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.319592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.319822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.320020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.320039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.320065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.132 [2024-07-15 13:04:02.322939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.132 [2024-07-15 13:04:02.332335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.132 [2024-07-15 13:04:02.332813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.132 [2024-07-15 13:04:02.332837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.132 [2024-07-15 13:04:02.332864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.132 [2024-07-15 13:04:02.333089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.132 [2024-07-15 13:04:02.333310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.132 [2024-07-15 13:04:02.333330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.132 [2024-07-15 13:04:02.333343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.394 [2024-07-15 13:04:02.336663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.394 [2024-07-15 13:04:02.345891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.394 [2024-07-15 13:04:02.346376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.394 [2024-07-15 13:04:02.346401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.394 [2024-07-15 13:04:02.346431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.394 [2024-07-15 13:04:02.346662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.394 [2024-07-15 13:04:02.346887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.394 [2024-07-15 13:04:02.346908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.394 [2024-07-15 13:04:02.346921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.394 [2024-07-15 13:04:02.349917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.394 [2024-07-15 13:04:02.359067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.394 [2024-07-15 13:04:02.359526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.394 [2024-07-15 13:04:02.359574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.394 [2024-07-15 13:04:02.359589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.394 [2024-07-15 13:04:02.359809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.394 [2024-07-15 13:04:02.360013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.394 [2024-07-15 13:04:02.360046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.394 [2024-07-15 13:04:02.360058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.394 [2024-07-15 13:04:02.362972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.394 [2024-07-15 13:04:02.372242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.394 [2024-07-15 13:04:02.372606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.394 [2024-07-15 13:04:02.372644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.372659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.372896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.373115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.373134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.373147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.376081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.395 [2024-07-15 13:04:02.385447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.385770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.385796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.385816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.386011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.386220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.386238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.386250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.389143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.395 [2024-07-15 13:04:02.398670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.399035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.399074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.399090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.399279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.399471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.399489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.399501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.402803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.395 [2024-07-15 13:04:02.411835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.412269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.412292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.412320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.412508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.412699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.412718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.412755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.415628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.395 [2024-07-15 13:04:02.424982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.425342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.425367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.425382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.425586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.425807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.425842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.425855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.428735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.395 [2024-07-15 13:04:02.438266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.438691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.438855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.438873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.439082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.439275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.439293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.439304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.442235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.395 [2024-07-15 13:04:02.451447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.451820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.451861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.451876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.452105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.452303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.452321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.452334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.455428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.395 [2024-07-15 13:04:02.465167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.465566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.465607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.465621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.465873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.466105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.466140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.466153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.469244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.395 [2024-07-15 13:04:02.478501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.478836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.478864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.478881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.479119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.479311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.479329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.479341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.482355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.395 [2024-07-15 13:04:02.491699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.492124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.492162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.492176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.492377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.492569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.492586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.492598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.495574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.395 [2024-07-15 13:04:02.505073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.505581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.395 [2024-07-15 13:04:02.505636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.395 [2024-07-15 13:04:02.505650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.395 [2024-07-15 13:04:02.505899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.395 [2024-07-15 13:04:02.506142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.395 [2024-07-15 13:04:02.506160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.395 [2024-07-15 13:04:02.506172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.395 [2024-07-15 13:04:02.509136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.395 [2024-07-15 13:04:02.518362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.395 [2024-07-15 13:04:02.518735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.518769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.518797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.519010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.519219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.519238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.519250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.396 [2024-07-15 13:04:02.522139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.396 [2024-07-15 13:04:02.531551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.396 [2024-07-15 13:04:02.531957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.531981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.531995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.532215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.532406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.532424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.532436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.396 [2024-07-15 13:04:02.535328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.396 [2024-07-15 13:04:02.544675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.396 [2024-07-15 13:04:02.545125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.545162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.545176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.545378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.545569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.545587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.545599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.396 [2024-07-15 13:04:02.548495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.396 [2024-07-15 13:04:02.557819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.396 [2024-07-15 13:04:02.558260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.558298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.558313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.558501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.558692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.558710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.558751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.396 [2024-07-15 13:04:02.561629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.396 [2024-07-15 13:04:02.570879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.396 [2024-07-15 13:04:02.571337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.571360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.571389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.571577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.571796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.571816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.571829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.396 [2024-07-15 13:04:02.574691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.396 [2024-07-15 13:04:02.583921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.396 [2024-07-15 13:04:02.584346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.584385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.584399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.584587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.584822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.584843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.584856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.396 [2024-07-15 13:04:02.588127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.396 [2024-07-15 13:04:02.597640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.396 [2024-07-15 13:04:02.598123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.396 [2024-07-15 13:04:02.598147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.396 [2024-07-15 13:04:02.598174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.396 [2024-07-15 13:04:02.598382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.396 [2024-07-15 13:04:02.598600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.396 [2024-07-15 13:04:02.598620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.396 [2024-07-15 13:04:02.598632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.657 [2024-07-15 13:04:02.601683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.657 [2024-07-15 13:04:02.610915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.657 [2024-07-15 13:04:02.611354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.657 [2024-07-15 13:04:02.611396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.657 [2024-07-15 13:04:02.611411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.657 [2024-07-15 13:04:02.611600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.657 [2024-07-15 13:04:02.611837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.657 [2024-07-15 13:04:02.611857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.657 [2024-07-15 13:04:02.611870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.657 [2024-07-15 13:04:02.614814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.657 [2024-07-15 13:04:02.623902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.657 [2024-07-15 13:04:02.624360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.657 [2024-07-15 13:04:02.624399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.657 [2024-07-15 13:04:02.624413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.657 [2024-07-15 13:04:02.624601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.657 [2024-07-15 13:04:02.624803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.657 [2024-07-15 13:04:02.624822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.657 [2024-07-15 13:04:02.624834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.657 [2024-07-15 13:04:02.627642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.657 [2024-07-15 13:04:02.637146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.657 [2024-07-15 13:04:02.637601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.657 [2024-07-15 13:04:02.637650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.657 [2024-07-15 13:04:02.637664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.657 [2024-07-15 13:04:02.637896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.657 [2024-07-15 13:04:02.638109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.657 [2024-07-15 13:04:02.638127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.657 [2024-07-15 13:04:02.638139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.657 [2024-07-15 13:04:02.641029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.657 [2024-07-15 13:04:02.650180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.657 [2024-07-15 13:04:02.650645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.657 [2024-07-15 13:04:02.650682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.657 [2024-07-15 13:04:02.650697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.657 [2024-07-15 13:04:02.650916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.657 [2024-07-15 13:04:02.651133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.657 [2024-07-15 13:04:02.651151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.657 [2024-07-15 13:04:02.651164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.657 [2024-07-15 13:04:02.654053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.657 [2024-07-15 13:04:02.663352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.657 [2024-07-15 13:04:02.663813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.657 [2024-07-15 13:04:02.663838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.657 [2024-07-15 13:04:02.663866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.657 [2024-07-15 13:04:02.664074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.657 [2024-07-15 13:04:02.664266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.657 [2024-07-15 13:04:02.664284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.657 [2024-07-15 13:04:02.664296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.657 [2024-07-15 13:04:02.667150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.657 [2024-07-15 13:04:02.676424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.657 [2024-07-15 13:04:02.676875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.676913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.676927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.677115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.677307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.677325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.677336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.680229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.658 [2024-07-15 13:04:02.689527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.690006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.690043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.690056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.690258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.690449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.690467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.690479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.693382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.658 [2024-07-15 13:04:02.702641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.703080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.703104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.703118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.703307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.703498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.703516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.703528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.706421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.658 [2024-07-15 13:04:02.715903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.716354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.716377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.716406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.716610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.716840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.716861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.716874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.719970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.658 [2024-07-15 13:04:02.729249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.729695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.729735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.729759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.729973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.730202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.730220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.730232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.733262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.658 [2024-07-15 13:04:02.742502] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.742996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.743023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.743055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.743245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.743436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.743454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.743466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.746405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.658 [2024-07-15 13:04:02.755873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.756362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.756400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.756414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.756603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.756823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.756843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.756856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.759742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.658 [2024-07-15 13:04:02.768989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.769426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.769463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.769478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.769666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.769888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.769908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.769921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.772807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.658 [2024-07-15 13:04:02.782122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.782532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.782583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.782596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.782828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.783025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.783048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.783061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.785887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.658 [2024-07-15 13:04:02.795157] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.795619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.795667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.795680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.795914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.796126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.796144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.796156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.799033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.658 [2024-07-15 13:04:02.808361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.808804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.658 [2024-07-15 13:04:02.808828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.658 [2024-07-15 13:04:02.808858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.658 [2024-07-15 13:04:02.809067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.658 [2024-07-15 13:04:02.809258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.658 [2024-07-15 13:04:02.809277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.658 [2024-07-15 13:04:02.809289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.658 [2024-07-15 13:04:02.812108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.658 [2024-07-15 13:04:02.821458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.658 [2024-07-15 13:04:02.821908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.659 [2024-07-15 13:04:02.821946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.659 [2024-07-15 13:04:02.821960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.659 [2024-07-15 13:04:02.822149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.659 [2024-07-15 13:04:02.822340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.659 [2024-07-15 13:04:02.822358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.659 [2024-07-15 13:04:02.822370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.659 [2024-07-15 13:04:02.825264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.659 [2024-07-15 13:04:02.834566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.659 [2024-07-15 13:04:02.835056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.659 [2024-07-15 13:04:02.835095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.659 [2024-07-15 13:04:02.835109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.659 [2024-07-15 13:04:02.835312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.659 [2024-07-15 13:04:02.835504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.659 [2024-07-15 13:04:02.835522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.659 [2024-07-15 13:04:02.835534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.659 [2024-07-15 13:04:02.838805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.659 [2024-07-15 13:04:02.848169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.659 [2024-07-15 13:04:02.848661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.659 [2024-07-15 13:04:02.848703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.659 [2024-07-15 13:04:02.848717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.659 [2024-07-15 13:04:02.848939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.659 [2024-07-15 13:04:02.849153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.659 [2024-07-15 13:04:02.849171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.659 [2024-07-15 13:04:02.849183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.659 [2024-07-15 13:04:02.852133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.659 [2024-07-15 13:04:02.861693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.659 [2024-07-15 13:04:02.862100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.659 [2024-07-15 13:04:02.862125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.659 [2024-07-15 13:04:02.862139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.862327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.862520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.862539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.862551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.865519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.920 [2024-07-15 13:04:02.874841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.875280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.875304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.875333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.875533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.875730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.875758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.875772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.878621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.920 [2024-07-15 13:04:02.888126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.888508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.888547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.888561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.888795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.888999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.889033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.889046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.892200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.920 [2024-07-15 13:04:02.901471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.901899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.901940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.901956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.902170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.902368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.902386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.902398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.905429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.920 [2024-07-15 13:04:02.914754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.915139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.915178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.915192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.915400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.915598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.915617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.915633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.918659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.920 [2024-07-15 13:04:02.927983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.928471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.928495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.928524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.928733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.928946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.928965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.928978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.931956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.920 [2024-07-15 13:04:02.941220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.941641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.941665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.941694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.941921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.942137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.942156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.942168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.945145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.920 [2024-07-15 13:04:02.954553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.954969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.954994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.955023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.955234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.955431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.955450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.955462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.958479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.920 [2024-07-15 13:04:02.967955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.968427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.968469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.968485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.968679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.968908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.968928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.968941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.971914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.920 [2024-07-15 13:04:02.981165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.920 [2024-07-15 13:04:02.981636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.920 [2024-07-15 13:04:02.981674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.920 [2024-07-15 13:04:02.981689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.920 [2024-07-15 13:04:02.981913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.920 [2024-07-15 13:04:02.982131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.920 [2024-07-15 13:04:02.982150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.920 [2024-07-15 13:04:02.982162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.920 [2024-07-15 13:04:02.985136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.920 [2024-07-15 13:04:02.994677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:02.995182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:02.995208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:02.995239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:02.995447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:02.995657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:02.995677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:02.995690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:02.998937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.921 [2024-07-15 13:04:03.008123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.008540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.008564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.008594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.008813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.009022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.009057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.009069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.012210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.921 [2024-07-15 13:04:03.021616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.022013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.022042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.022074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.022282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.022492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.022511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.022525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.025658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.921 [2024-07-15 13:04:03.034975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.035413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.035437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.035451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.035659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.035884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.035905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.035917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.038949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.921 [2024-07-15 13:04:03.048277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.048708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.048753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.048770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.048991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.049208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.049227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.049239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.052215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.921 [2024-07-15 13:04:03.061462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.061883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.061909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.061940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.062171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.062368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.062387] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.062399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.065371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.921 [2024-07-15 13:04:03.074657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.075072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.075097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.075112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.075306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.075503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.075521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.075534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.078522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.921 [2024-07-15 13:04:03.087953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.088347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.088387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.088402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.088596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.088827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.088847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.088861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.092227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.921 [2024-07-15 13:04:03.101620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.102115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.102142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.102178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.102403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.102652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.102672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.102684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.105759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.921 [2024-07-15 13:04:03.114905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.921 [2024-07-15 13:04:03.115384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.921 [2024-07-15 13:04:03.115419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:44.921 [2024-07-15 13:04:03.115448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:44.921 [2024-07-15 13:04:03.115642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:44.921 [2024-07-15 13:04:03.115874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.921 [2024-07-15 13:04:03.115895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.921 [2024-07-15 13:04:03.115908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.921 [2024-07-15 13:04:03.118993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.128242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.128701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.128745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.128762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.128976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.129207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.129227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.129239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.132384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.141454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.141898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.141937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.141953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.142164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.142361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.142384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.142397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.145376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.154802] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.155219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.155243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.155271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.155465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.155662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.155681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.155693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.158666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.168190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.168669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.168707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.168723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.168944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.169161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.169180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.169192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.172165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.181399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.181859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.181900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.181916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.182128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.182325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.182344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.182356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.185334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.194732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.195178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.195208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.195238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.195432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.195628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.195647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.195659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.198636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.208062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.208502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.208526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.208555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.208776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.208980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.208999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.209012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.211984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.221336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.221788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.221827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.221843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.222057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.222254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.222272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.222284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.225262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.234662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.235128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.235152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.235166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.235380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.235577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.235595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.235607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.238613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.247854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.248314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.248352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.248367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.248560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.248786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.248806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.248818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.251793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.261049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.261497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.261536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.261551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.261768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.261973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.261992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.262005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.264974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.274257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.274745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.274770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.274798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.274999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.275213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.275232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.275249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.278223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.181 [2024-07-15 13:04:03.287460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.181 [2024-07-15 13:04:03.287937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.181 [2024-07-15 13:04:03.287963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.181 [2024-07-15 13:04:03.287994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.181 [2024-07-15 13:04:03.288204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.181 [2024-07-15 13:04:03.288401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.181 [2024-07-15 13:04:03.288420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.181 [2024-07-15 13:04:03.288433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.181 [2024-07-15 13:04:03.291409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.181 [2024-07-15 13:04:03.300689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.301173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.301197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.301226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.301420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.301635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.301655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.301668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.304663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.182 [2024-07-15 13:04:03.313967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.314417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.314442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.314470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.314664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.314892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.314912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.314925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.317940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.182 [2024-07-15 13:04:03.327196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.327612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.327640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.327670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.327893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.328111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.328130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.328143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.331114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.182 [2024-07-15 13:04:03.340514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.340987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.341026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.341041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.341268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.341465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.341483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.341496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.344935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.182 [2024-07-15 13:04:03.353804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.354247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.354271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.354300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.354494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.354691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.354709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.354743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.357797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.182 [2024-07-15 13:04:03.367167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.367582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.367612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.367641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.367886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.368112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.368131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.368143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.371115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.182 [2024-07-15 13:04:03.380376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.182 [2024-07-15 13:04:03.380800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.182 [2024-07-15 13:04:03.380834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.182 [2024-07-15 13:04:03.380864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.182 [2024-07-15 13:04:03.381079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.182 [2024-07-15 13:04:03.381276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.182 [2024-07-15 13:04:03.381295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.182 [2024-07-15 13:04:03.381307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.182 [2024-07-15 13:04:03.384498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.441 [2024-07-15 13:04:03.393715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.441 [2024-07-15 13:04:03.394159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 13:04:03.394197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.441 [2024-07-15 13:04:03.394212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.441 [2024-07-15 13:04:03.394419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.441 [2024-07-15 13:04:03.394617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.441 [2024-07-15 13:04:03.394636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.441 [2024-07-15 13:04:03.394649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.441 [2024-07-15 13:04:03.397627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.441 [2024-07-15 13:04:03.407031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.441 [2024-07-15 13:04:03.407450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 13:04:03.407489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.441 [2024-07-15 13:04:03.407504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.441 [2024-07-15 13:04:03.407727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.441 [2024-07-15 13:04:03.407941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.441 [2024-07-15 13:04:03.407961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.441 [2024-07-15 13:04:03.407974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.441 [2024-07-15 13:04:03.410966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.441 [2024-07-15 13:04:03.420266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.441 [2024-07-15 13:04:03.420706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 13:04:03.420752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.441 [2024-07-15 13:04:03.420768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.441 [2024-07-15 13:04:03.420981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.441 [2024-07-15 13:04:03.421194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.441 [2024-07-15 13:04:03.421213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.441 [2024-07-15 13:04:03.421226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.441 [2024-07-15 13:04:03.424206] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.441 [2024-07-15 13:04:03.433441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.441 [2024-07-15 13:04:03.433892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 13:04:03.433931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.441 [2024-07-15 13:04:03.433946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.441 [2024-07-15 13:04:03.434157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.441 [2024-07-15 13:04:03.434355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.441 [2024-07-15 13:04:03.434373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.441 [2024-07-15 13:04:03.434386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.441 [2024-07-15 13:04:03.437364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.441 [2024-07-15 13:04:03.446786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.441 [2024-07-15 13:04:03.447275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 13:04:03.447299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.441 [2024-07-15 13:04:03.447328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.441 [2024-07-15 13:04:03.447522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.441 [2024-07-15 13:04:03.447734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.441 [2024-07-15 13:04:03.447762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.441 [2024-07-15 13:04:03.447775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.441 [2024-07-15 13:04:03.450747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.441 [2024-07-15 13:04:03.459990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.441 [2024-07-15 13:04:03.460390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.441 [2024-07-15 13:04:03.460414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.441 [2024-07-15 13:04:03.460448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.441 [2024-07-15 13:04:03.460643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.441 [2024-07-15 13:04:03.460871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.441 [2024-07-15 13:04:03.460891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.441 [2024-07-15 13:04:03.460905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.441 [2024-07-15 13:04:03.463879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.442 [2024-07-15 13:04:03.473319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.473760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.473799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.473814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.474028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.474241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.474260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.474272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.477249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.442 [2024-07-15 13:04:03.486492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.486985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.487011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.487040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.487250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.487447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.487465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.487478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.490451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.442 [2024-07-15 13:04:03.499691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.500182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.500206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.500236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.500430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.500627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.500650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.500663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.503627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.442 [2024-07-15 13:04:03.513034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.513417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.513442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.513456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.513650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.513875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.513896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.513909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.516887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.442 [2024-07-15 13:04:03.526261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.526618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.526655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.526669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.526899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.527146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.527166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.527180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.530264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.442 [2024-07-15 13:04:03.539660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.540027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.540054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.540085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.540286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.540489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.540509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.540521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.543866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.442 [2024-07-15 13:04:03.552973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.553382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.553408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.553423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.553623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.553855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.553876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.553890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.556951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.442 [2024-07-15 13:04:03.566298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.566671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.566710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.566724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.566949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.567169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.567187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.567200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.570321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.442 [2024-07-15 13:04:03.579568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.579927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.579954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.579984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.580216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.580419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.580439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.580451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.583460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.442 [2024-07-15 13:04:03.592913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.593315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.593340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.593355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.593593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.593834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.593856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.593870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.597293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.442 [2024-07-15 13:04:03.606222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.606590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.606615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.606630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.442 [2024-07-15 13:04:03.606862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.442 [2024-07-15 13:04:03.607090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.442 [2024-07-15 13:04:03.607109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.442 [2024-07-15 13:04:03.607122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.442 [2024-07-15 13:04:03.610193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.442 [2024-07-15 13:04:03.619593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.442 [2024-07-15 13:04:03.619961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.442 [2024-07-15 13:04:03.619989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.442 [2024-07-15 13:04:03.620005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.443 [2024-07-15 13:04:03.620232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.443 [2024-07-15 13:04:03.620430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.443 [2024-07-15 13:04:03.620449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.443 [2024-07-15 13:04:03.620461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.443 [2024-07-15 13:04:03.623504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.443 [2024-07-15 13:04:03.633049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.443 [2024-07-15 13:04:03.633366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.443 [2024-07-15 13:04:03.633391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.443 [2024-07-15 13:04:03.633407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.443 [2024-07-15 13:04:03.633601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.443 [2024-07-15 13:04:03.633832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.443 [2024-07-15 13:04:03.633854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.443 [2024-07-15 13:04:03.633873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.443 [2024-07-15 13:04:03.636959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.443 [2024-07-15 13:04:03.646545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.646891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.646918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.646933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.647145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.647363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.647382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.647395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.650383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.703 [2024-07-15 13:04:03.659820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.660232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.660256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.660270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.660479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.660676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.660694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.660706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.663700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.703 [2024-07-15 13:04:03.673072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.673436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.673474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.673489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.673697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.673926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.673946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.673960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.676933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.703 [2024-07-15 13:04:03.686438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.686869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.686908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.686938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.687149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.687347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.687366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.687378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.690326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.703 [2024-07-15 13:04:03.699751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.700177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.700201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.700229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.700422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.700620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.700638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.700651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.703613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.703 [2024-07-15 13:04:03.713083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.713503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.713527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.713557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.713779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.713983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.714002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.714015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.716993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.703 [2024-07-15 13:04:03.726295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.726679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.726718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.726732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.726958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.727179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.727198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.727210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.730184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.703 [2024-07-15 13:04:03.739616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.739967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.739993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.740007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.740217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.740415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.740433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.740446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.743461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.703 [2024-07-15 13:04:03.752902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.753340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.753378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.753393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.703 [2024-07-15 13:04:03.753587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.703 [2024-07-15 13:04:03.753812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.703 [2024-07-15 13:04:03.753832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.703 [2024-07-15 13:04:03.753845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.703 [2024-07-15 13:04:03.756820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.703 [2024-07-15 13:04:03.766249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.703 [2024-07-15 13:04:03.766699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.703 [2024-07-15 13:04:03.766744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.703 [2024-07-15 13:04:03.766764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.766979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.767194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.767212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.767225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.770255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.704 [2024-07-15 13:04:03.779527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.779974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.779999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.780029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.780243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.780440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.780459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.780471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.783450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.704 [2024-07-15 13:04:03.792912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.793351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.793385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.793414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.793608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.793836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.793856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.793869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.796838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.704 [2024-07-15 13:04:03.806229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.806630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.806653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.806667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.806905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.807123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.807142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.807155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.810127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.704 [2024-07-15 13:04:03.819588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.820056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.820081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.820100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.820330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.820527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.820546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.820558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.823523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.704 [2024-07-15 13:04:03.833128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.833567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.833605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.833618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.833851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.834065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.834083] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.834095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.837085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.704 [2024-07-15 13:04:03.846241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.846693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.846717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.846731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.846977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.847205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.847223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.847235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.850593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.704 [2024-07-15 13:04:03.859485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.859927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.859966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.859981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.860204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.860396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.860418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.860431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.863435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.704 [2024-07-15 13:04:03.872866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.873327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.873374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.873388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.873590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.873823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.873852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.873865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.876849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.704 [2024-07-15 13:04:03.885990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.886457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.886494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.886508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.886710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.886931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.886952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.886964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.889968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.704 [2024-07-15 13:04:03.899138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.704 [2024-07-15 13:04:03.899609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.704 [2024-07-15 13:04:03.899660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.704 [2024-07-15 13:04:03.899674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.704 [2024-07-15 13:04:03.899905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.704 [2024-07-15 13:04:03.900117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.704 [2024-07-15 13:04:03.900135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.704 [2024-07-15 13:04:03.900148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.704 [2024-07-15 13:04:03.902995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.965 [2024-07-15 13:04:03.912542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.912975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.913038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.913053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.913278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.913470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.913487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.913499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.916546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.965 [2024-07-15 13:04:03.925654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.926100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.926138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.926151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.926353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.926545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.926563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.926575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.929428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.965 [2024-07-15 13:04:03.938703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.939136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.939186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.939200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.939403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.939594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.939612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.939623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.942402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.965 [2024-07-15 13:04:03.951673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.952163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.952210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.952224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.952431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.952622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.952640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.952652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.955547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.965 [2024-07-15 13:04:03.964872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.965341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.965378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.965393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.965581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.965802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.965825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.965837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.968645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.965 [2024-07-15 13:04:03.978054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.978457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.978481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.978494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.978696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.978919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.978939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.978952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.981842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.965 [2024-07-15 13:04:03.991080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:03.991596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:03.991646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:03.991665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:03.991944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:03.992221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:03.992247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:03.992273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:03.995318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.965 [2024-07-15 13:04:04.004362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:04.004817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:04.004843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:04.004874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:04.005083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:04.005275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:04.005293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:04.005305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:04.008200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.965 [2024-07-15 13:04:04.017433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:04.017816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:04.017854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:04.017867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:04.018070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:04.018262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:04.018280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:04.018291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:04.021227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.965 [2024-07-15 13:04:04.030527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:04.030960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:04.030984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:04.031015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:04.031220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:04.031411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.965 [2024-07-15 13:04:04.031430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.965 [2024-07-15 13:04:04.031442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.965 [2024-07-15 13:04:04.034298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.965 [2024-07-15 13:04:04.043583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.965 [2024-07-15 13:04:04.044049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.965 [2024-07-15 13:04:04.044086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.965 [2024-07-15 13:04:04.044101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.965 [2024-07-15 13:04:04.044289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.965 [2024-07-15 13:04:04.044480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.044498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.044510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.047404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.966 [2024-07-15 13:04:04.056708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.057193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.057232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.057246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.057434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.057625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.057643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.057655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.060513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.966 [2024-07-15 13:04:04.069834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.070272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.070309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.070324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.070512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.070704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.070721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.070734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.073629] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.966 [2024-07-15 13:04:04.082956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.083377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.083400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.083414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.083616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.083842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.083862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.083875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.086701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.966 [2024-07-15 13:04:04.096014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.096435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.096458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.096486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.096713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.096947] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.096969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.096982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.100510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.966 [2024-07-15 13:04:04.109356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.109815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.109839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.109867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.110075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.110267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.110285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.110297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.113299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.966 [2024-07-15 13:04:04.122523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.122985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.123034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.123048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.123250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.123442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.123459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.123471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.126256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.966 [2024-07-15 13:04:04.135533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.135988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.136037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.136051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.136253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.136445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.136462] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.136475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.139252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:45.966 [2024-07-15 13:04:04.148521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.148917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.148965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.148979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.149180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.149372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.149390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.149402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.152179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:45.966 [2024-07-15 13:04:04.161605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:45.966 [2024-07-15 13:04:04.162064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.966 [2024-07-15 13:04:04.162114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:45.966 [2024-07-15 13:04:04.162127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:45.966 [2024-07-15 13:04:04.162329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:45.966 [2024-07-15 13:04:04.162520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:45.966 [2024-07-15 13:04:04.162538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:45.966 [2024-07-15 13:04:04.162551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:45.966 [2024-07-15 13:04:04.165366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.226 [2024-07-15 13:04:04.174777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.226 [2024-07-15 13:04:04.175235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.226 [2024-07-15 13:04:04.175258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.226 [2024-07-15 13:04:04.175292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.226 [2024-07-15 13:04:04.175481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.226 [2024-07-15 13:04:04.175690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.175708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.175721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.178632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.227 [2024-07-15 13:04:04.187938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.188403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.188441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.188456] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.188644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.188866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.188886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.188898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.191768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.227 [2024-07-15 13:04:04.201043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.201477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.201515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.201530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.201718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.201939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.201958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.201971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.204860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.227 [2024-07-15 13:04:04.214099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.214555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.214602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.214615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.214880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.215106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.215129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.215143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.218076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.227 [2024-07-15 13:04:04.227259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.227729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.227761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.227776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.227984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.228191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.228209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.228222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.231037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.227 [2024-07-15 13:04:04.240310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.240751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.240788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.240803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.240991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.241183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.241200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.241212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.244107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.227 [2024-07-15 13:04:04.253450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.253894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.253931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.253946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.254145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.254336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.254354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.254366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.257257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.227 [2024-07-15 13:04:04.266527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.266984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.267033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.267047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.267249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.267441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.267459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.267471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.270285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.227 [2024-07-15 13:04:04.279599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.279998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.280021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.280035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.280236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.280428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.280446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.280458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.283352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.227 [2024-07-15 13:04:04.292735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.293177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.293199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.293227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.293416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.293607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.293625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.293636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.296531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.227 [2024-07-15 13:04:04.305854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.306332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.306356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.306385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.306578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.227 [2024-07-15 13:04:04.306798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.227 [2024-07-15 13:04:04.306818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.227 [2024-07-15 13:04:04.306831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.227 [2024-07-15 13:04:04.309776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.227 [2024-07-15 13:04:04.318905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.227 [2024-07-15 13:04:04.319371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.227 [2024-07-15 13:04:04.319395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.227 [2024-07-15 13:04:04.319425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.227 [2024-07-15 13:04:04.319630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.319850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.319870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.319883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.322772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.228 [2024-07-15 13:04:04.332090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.332510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.332533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.332560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.332778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.332977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.332995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.333008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.335798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.228 [2024-07-15 13:04:04.345108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.345547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.345570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.345599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.345814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.346013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.346032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.346064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.348939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.228 [2024-07-15 13:04:04.358528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.359034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.359082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.359096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.359284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.359475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.359493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.359505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.362448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.228 [2024-07-15 13:04:04.371684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.372158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.372205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.372219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.372421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.372612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.372630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.372642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.375596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.228 [2024-07-15 13:04:04.384924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.385393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.385441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.385454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.385657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.385878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.385898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.385910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.388734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.228 [2024-07-15 13:04:04.398011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.398422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.398471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.398485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.398687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.398909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.398929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.398942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.401830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.228 [2024-07-15 13:04:04.411145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.411600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.411650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.411663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.411896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.412108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.412126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.412138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.414988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.228 [2024-07-15 13:04:04.424311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.228 [2024-07-15 13:04:04.424727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.228 [2024-07-15 13:04:04.424781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.228 [2024-07-15 13:04:04.424796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.228 [2024-07-15 13:04:04.425004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.228 [2024-07-15 13:04:04.425213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.228 [2024-07-15 13:04:04.425231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.228 [2024-07-15 13:04:04.425243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.228 [2024-07-15 13:04:04.428262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.488 [2024-07-15 13:04:04.437575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.488 [2024-07-15 13:04:04.438034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.488 [2024-07-15 13:04:04.438059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.488 [2024-07-15 13:04:04.438074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.488 [2024-07-15 13:04:04.438268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.488 [2024-07-15 13:04:04.438481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.488 [2024-07-15 13:04:04.438500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.488 [2024-07-15 13:04:04.438512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.488 [2024-07-15 13:04:04.441366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.488 [2024-07-15 13:04:04.450640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.488 [2024-07-15 13:04:04.451080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.488 [2024-07-15 13:04:04.451104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.488 [2024-07-15 13:04:04.451118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.488 [2024-07-15 13:04:04.451307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.488 [2024-07-15 13:04:04.451498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.488 [2024-07-15 13:04:04.451516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.488 [2024-07-15 13:04:04.451528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.488 [2024-07-15 13:04:04.454423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.488 [2024-07-15 13:04:04.463904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.488 [2024-07-15 13:04:04.464363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.488 [2024-07-15 13:04:04.464386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.488 [2024-07-15 13:04:04.464414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.488 [2024-07-15 13:04:04.464603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.488 [2024-07-15 13:04:04.464825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.488 [2024-07-15 13:04:04.464844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.488 [2024-07-15 13:04:04.464857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.488 [2024-07-15 13:04:04.467643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.488 [2024-07-15 13:04:04.476993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.488 [2024-07-15 13:04:04.477468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.488 [2024-07-15 13:04:04.477491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.488 [2024-07-15 13:04:04.477520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.488 [2024-07-15 13:04:04.477708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.488 [2024-07-15 13:04:04.477931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.488 [2024-07-15 13:04:04.477950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.488 [2024-07-15 13:04:04.477963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.488 [2024-07-15 13:04:04.480855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.488 [2024-07-15 13:04:04.489992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.488 [2024-07-15 13:04:04.490403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.488 [2024-07-15 13:04:04.490426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.488 [2024-07-15 13:04:04.490440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.488 [2024-07-15 13:04:04.490643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.488 [2024-07-15 13:04:04.490845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.488 [2024-07-15 13:04:04.490864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.488 [2024-07-15 13:04:04.490876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.488 [2024-07-15 13:04:04.493688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.488 [2024-07-15 13:04:04.503096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.488 [2024-07-15 13:04:04.503559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.488 [2024-07-15 13:04:04.503610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.503623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.489 [2024-07-15 13:04:04.503855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.489 [2024-07-15 13:04:04.504053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.489 [2024-07-15 13:04:04.504072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.489 [2024-07-15 13:04:04.504084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.489 [2024-07-15 13:04:04.506976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.489 [2024-07-15 13:04:04.516316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.489 [2024-07-15 13:04:04.516791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.489 [2024-07-15 13:04:04.516815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.516829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.489 [2024-07-15 13:04:04.517051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.489 [2024-07-15 13:04:04.517242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.489 [2024-07-15 13:04:04.517260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.489 [2024-07-15 13:04:04.517273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.489 [2024-07-15 13:04:04.520171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.489 [2024-07-15 13:04:04.529479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.489 [2024-07-15 13:04:04.529956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.489 [2024-07-15 13:04:04.530005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.530024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.489 [2024-07-15 13:04:04.530213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.489 [2024-07-15 13:04:04.530404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.489 [2024-07-15 13:04:04.530422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.489 [2024-07-15 13:04:04.530434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.489 [2024-07-15 13:04:04.533326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.489 [2024-07-15 13:04:04.542513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.489 [2024-07-15 13:04:04.542980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.489 [2024-07-15 13:04:04.543027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.543041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.489 [2024-07-15 13:04:04.543243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.489 [2024-07-15 13:04:04.543435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.489 [2024-07-15 13:04:04.543453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.489 [2024-07-15 13:04:04.543465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.489 [2024-07-15 13:04:04.546244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.489 [2024-07-15 13:04:04.555516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.489 [2024-07-15 13:04:04.555975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.489 [2024-07-15 13:04:04.555998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.556027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.489 [2024-07-15 13:04:04.556215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.489 [2024-07-15 13:04:04.556407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.489 [2024-07-15 13:04:04.556425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.489 [2024-07-15 13:04:04.556437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.489 [2024-07-15 13:04:04.559331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.489 [2024-07-15 13:04:04.568588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.489 [2024-07-15 13:04:04.568998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.489 [2024-07-15 13:04:04.569027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.569055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.489 [2024-07-15 13:04:04.569263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3494411 Killed "${NVMF_APP[@]}" "$@" 00:24:46.489 [2024-07-15 13:04:04.569481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.489 [2024-07-15 13:04:04.569500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.489 [2024-07-15 13:04:04.569528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.489 [2024-07-15 13:04:04.572817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3495377 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3495377 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3495377 ']' 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
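The previous nvmf_tgt instance (pid 3494411) has just been killed, and tgt_init/nvmfappstart -m 0xE is relaunching the target inside the per-test network namespace before waiting on its RPC socket. Stripped of the helper functions, that amounts to roughly the following sketch (the polling loop is an illustrative stand-in for waitforlisten, and the binary path, namespace name and default socket are taken from this run):

  # restart the target in the cvl_0_0_ns_spdk namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.5
  done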
00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.489 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.489 [2024-07-15 13:04:04.581979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.489 [2024-07-15 13:04:04.582371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.489 [2024-07-15 13:04:04.582409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.489 [2024-07-15 13:04:04.582424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.582632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.582867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.582888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.582902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.586023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.490 [2024-07-15 13:04:04.595320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.595678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.595703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.595733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.595954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.596193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.596216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.596229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.599343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.490 [2024-07-15 13:04:04.608706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.609138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.609178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.609193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.609402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.609599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.609618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.609631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.612638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.490 [2024-07-15 13:04:04.619183] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:24:46.490 [2024-07-15 13:04:04.619255] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.490 [2024-07-15 13:04:04.622176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.622543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.622582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.622596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.622835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.623059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.623094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.623107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.626156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.490 [2024-07-15 13:04:04.635458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.635803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.635828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.635842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.636036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.636233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.636251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.636269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.639246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.490 [2024-07-15 13:04:04.648841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.649208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.649248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.649263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.649471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.649669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.649687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.649699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.652676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.490 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.490 [2024-07-15 13:04:04.662220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.662559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.662584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.662599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.662840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.663051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.490 [2024-07-15 13:04:04.663071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.490 [2024-07-15 13:04:04.663084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.490 [2024-07-15 13:04:04.666208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.490 [2024-07-15 13:04:04.675514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.490 [2024-07-15 13:04:04.675904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.490 [2024-07-15 13:04:04.675945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.490 [2024-07-15 13:04:04.675960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.490 [2024-07-15 13:04:04.676210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.490 [2024-07-15 13:04:04.676437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.491 [2024-07-15 13:04:04.676459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.491 [2024-07-15 13:04:04.676472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.491 [2024-07-15 13:04:04.679796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.491 [2024-07-15 13:04:04.685762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:46.491 [2024-07-15 13:04:04.688894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.491 [2024-07-15 13:04:04.689276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.491 [2024-07-15 13:04:04.689303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.491 [2024-07-15 13:04:04.689333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.491 [2024-07-15 13:04:04.689571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.491 [2024-07-15 13:04:04.689805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.491 [2024-07-15 13:04:04.689826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.491 [2024-07-15 13:04:04.689840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.491 [2024-07-15 13:04:04.693085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.750 [2024-07-15 13:04:04.702312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.702770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.702805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.702825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.703042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.703267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.703287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.703303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.706374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
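The freshly started target reports "Total cores available: 3" because it was launched with -m 0xE: that mask is binary 1110, i.e. CPUs 1, 2 and 3, which matches the three reactor threads that show up a little further down. A quick shell check of that arithmetic (illustrative only, not from the test scripts):

  # expand the 0xE core mask the target was started with
  mask=0xE
  for cpu in {0..7}; do
          (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
  done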
00:24:46.750 [2024-07-15 13:04:04.715819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.716194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.716233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.716248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.716456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.716654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.716673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.716686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.719847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.750 [2024-07-15 13:04:04.729254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.729602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.729627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.729642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.729896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.730134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.730153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.730166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.733290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.750 [2024-07-15 13:04:04.742647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.743012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.743042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.743073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.743269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.743467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.743487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.743500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.746576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.750 [2024-07-15 13:04:04.756204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.757291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.757342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.757363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.757577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.757819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.757841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.757860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.761045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.750 [2024-07-15 13:04:04.769810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.770185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.770213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.770244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.770445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.770649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.770669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.770690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.773843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.750 [2024-07-15 13:04:04.783277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.783651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.783692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.783707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.783937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.784162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.784182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.784197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.787265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.750 [2024-07-15 13:04:04.796630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.797016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.797043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.797059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.797268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.797479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.797498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.797513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.797775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.750 [2024-07-15 13:04:04.797820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.750 [2024-07-15 13:04:04.797835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.750 [2024-07-15 13:04:04.797846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.750 [2024-07-15 13:04:04.797856] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.750 [2024-07-15 13:04:04.797920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.750 [2024-07-15 13:04:04.798022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.750 [2024-07-15 13:04:04.798026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.750 [2024-07-15 13:04:04.800698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.750 [2024-07-15 13:04:04.810297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.810814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.810851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.810873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.811119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.811335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.811356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.811373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:46.750 [2024-07-15 13:04:04.814624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.750 [2024-07-15 13:04:04.824112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.750 [2024-07-15 13:04:04.824576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.750 [2024-07-15 13:04:04.824613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.750 [2024-07-15 13:04:04.824633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.750 [2024-07-15 13:04:04.824883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.750 [2024-07-15 13:04:04.825121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.750 [2024-07-15 13:04:04.825142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.750 [2024-07-15 13:04:04.825161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.750 [2024-07-15 13:04:04.828365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.751 [2024-07-15 13:04:04.837647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.838186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.838223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.838243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.838465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.838692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.838712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.838755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.841973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.751 [2024-07-15 13:04:04.851236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.851724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.851769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.851789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.852010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.852239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.852260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.852296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.855585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.751 [2024-07-15 13:04:04.864858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.865346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.865397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.865417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.865648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.865897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.865919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.865937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.869178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.751 [2024-07-15 13:04:04.878355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.878829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.878861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.878879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.879099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.879330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.879351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.879367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.882559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.751 [2024-07-15 13:04:04.891834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.892252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.892293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.892309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.892515] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.892725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.892753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.892768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.896010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.751 [2024-07-15 13:04:04.905428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.905855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.905891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.905908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.906122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.906339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.906360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.906373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.909619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.751 [2024-07-15 13:04:04.918998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.919430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.919455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.919471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.919693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.919935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.919958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.919972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.923238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.751 [2024-07-15 13:04:04.932639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.933003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.933031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.933046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.933268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.933455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.751 [2024-07-15 13:04:04.933480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.933499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.933516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.751 [2024-07-15 13:04:04.936779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.751 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.751 [2024-07-15 13:04:04.946071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.751 [2024-07-15 13:04:04.946548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.751 [2024-07-15 13:04:04.946586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:46.751 [2024-07-15 13:04:04.946602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:46.751 [2024-07-15 13:04:04.946846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:46.751 [2024-07-15 13:04:04.947079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:46.751 [2024-07-15 13:04:04.947113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:46.751 [2024-07-15 13:04:04.947126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:46.751 [2024-07-15 13:04:04.950278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.009 [2024-07-15 13:04:04.959587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.009 [2024-07-15 13:04:04.960057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.009 [2024-07-15 13:04:04.960091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:47.009 [2024-07-15 13:04:04.960110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:47.009 [2024-07-15 13:04:04.960340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:47.009 [2024-07-15 13:04:04.960564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.009 [2024-07-15 13:04:04.960600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.009 [2024-07-15 13:04:04.960626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.009 [2024-07-15 13:04:04.963800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.009 Malloc0 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.009 [2024-07-15 13:04:04.973206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.009 [2024-07-15 13:04:04.973643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.009 [2024-07-15 13:04:04.973688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:47.009 [2024-07-15 13:04:04.973707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:47.009 [2024-07-15 13:04:04.973969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:47.009 [2024-07-15 13:04:04.974210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.009 [2024-07-15 13:04:04.974231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.009 [2024-07-15 13:04:04.974248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.009 [2024-07-15 13:04:04.977411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.009 [2024-07-15 13:04:04.986574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.009 [2024-07-15 13:04:04.986966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:47.009 [2024-07-15 13:04:04.986993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb1540 with addr=10.0.0.2, port=4420 00:24:47.009 [2024-07-15 13:04:04.987008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb1540 is same with the state(5) to be set 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.009 [2024-07-15 13:04:04.987222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb1540 (9): Bad file descriptor 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.009 [2024-07-15 13:04:04.987443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:47.009 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:47.009 [2024-07-15 13:04:04.987463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:47.009 [2024-07-15 13:04:04.987477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.009 [2024-07-15 13:04:04.990619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:47.010 [2024-07-15 13:04:04.991010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.010 13:04:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.010 13:04:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3494696 00:24:47.010 [2024-07-15 13:04:05.000135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.010 [2024-07-15 13:04:05.028348] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
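By this point the xtrace above has rebuilt the whole target configuration through rpc_cmd: the TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev attached as a namespace, and a listener on 10.0.0.2:4420 (hence the "NVMe/TCP Target Listening" notice). Run by hand against the target's RPC socket, the same sequence would look roughly like this sketch, which calls scripts/rpc.py directly instead of going through the rpc_cmd wrapper:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                # TCP transport, 8 KiB IO unit size
  $rpc bdev_malloc_create 64 512 -b Malloc0                   # 64 MiB backing bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0        # attach Malloc0 as a namespace
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420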
00:24:57.025 00:24:57.025 Latency(us) 00:24:57.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.025 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:57.025 Verification LBA range: start 0x0 length 0x4000 00:24:57.025 Nvme1n1 : 15.01 6791.32 26.53 10157.36 0.00 7530.28 831.34 19903.53 00:24:57.025 =================================================================================================================== 00:24:57.025 Total : 6791.32 26.53 10157.36 0.00 7530.28 831.34 19903.53 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:57.025 rmmod nvme_tcp 00:24:57.025 rmmod nvme_fabrics 00:24:57.025 rmmod nvme_keyring 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3495377 ']' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3495377 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3495377 ']' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3495377 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3495377 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3495377' 00:24:57.025 killing process with pid 3495377 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3495377 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3495377 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.025 13:04:14 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.932 13:04:16 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:58.932 00:24:58.932 real 0m22.791s 00:24:58.932 user 1m1.121s 00:24:58.932 sys 0m4.448s 00:24:58.932 13:04:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.932 13:04:16 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:58.932 ************************************ 00:24:58.932 END TEST nvmf_bdevperf 00:24:58.932 ************************************ 00:24:58.932 13:04:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:58.932 13:04:16 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:58.932 13:04:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:58.932 13:04:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.932 13:04:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:58.932 ************************************ 00:24:58.932 START TEST nvmf_target_disconnect 00:24:58.932 ************************************ 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:58.932 * Looking for test storage... 
00:24:58.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.932 13:04:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:58.933 13:04:16 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
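The arrays filled above map the supported NIC families to PCI vendor/device IDs (the e810 entries are 0x8086:0x1592 and 0x8086:0x159b, x722 is 0x37d2, and the remaining entries cover Mellanox 0x15b3 parts); gather_supported_nvmf_pci_devs then walks the matching PCI functions and resolves their net interfaces from sysfs, as the next entries show. A rough manual equivalent, where lspci and the cvl_* interface names are assumptions drawn from this particular host:

lspci -D -d 8086:159b                        # list E810 ports; here 0000:84:00.0 and 0000:84:00.1
ls /sys/bus/pci/devices/0000:84:00.0/net/    # -> cvl_0_0, the net device behind that PCI function
ls /sys/bus/pci/devices/0000:84:00.1/net/    # -> cvl_0_1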
00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:00.853 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:00.853 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.853 13:04:18 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:00.853 Found net devices under 0000:84:00.0: cvl_0_0 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:00.853 Found net devices under 0000:84:00.1: cvl_0_1 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.853 13:04:18 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:25:00.853 00:25:00.853 --- 10.0.0.2 ping statistics --- 00:25:00.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.853 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:25:00.853 00:25:00.853 --- 10.0.0.1 ping statistics --- 00:25:00.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.853 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.853 13:04:19 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:01.114 ************************************ 00:25:01.114 START TEST nvmf_target_disconnect_tc1 00:25:01.114 ************************************ 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:01.114 
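At this point nvmftestinit has wired the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, cvl_0_1 stays in the default namespace with 10.0.0.1/24, an iptables rule accepts TCP port 4420 on the initiator-side interface, and both directions are verified with a single ping. A minimal sketch of the same wiring, using the interface names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # initiator -> namespaced target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_target_disconnect_tc1 case whose expansion starts above then runs the reconnect example against 10.0.0.2 before any target is listening, so the run is expected to fail.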
13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.114 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.114 [2024-07-15 13:04:19.178542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.114 [2024-07-15 13:04:19.178632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe65790 with addr=10.0.0.2, port=4420 00:25:01.114 [2024-07-15 13:04:19.178668] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:01.114 [2024-07-15 13:04:19.178697] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:01.114 [2024-07-15 13:04:19.178709] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:01.114 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:01.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:01.114 Initializing NVMe Controllers 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:01.114 00:25:01.114 real 0m0.089s 00:25:01.114 user 0m0.039s 00:25:01.114 sys 0m0.050s 
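The tc1 case ends as intended: nothing is listening on 10.0.0.2:4420 yet, so spdk_nvme_probe() fails with connect() errno 111 (ECONNREFUSED), the reconnect example exits non-zero, and the NOT wrapper turns that failure into a pass (es=1). A hand-run equivalent of what just executed, assuming the same build tree:

# expected to fail while no target listens on 10.0.0.2:4420
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    || echo 'failed as expected (ECONNREFUSED)'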
00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:01.114 ************************************ 00:25:01.114 END TEST nvmf_target_disconnect_tc1 00:25:01.114 ************************************ 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:01.114 ************************************ 00:25:01.114 START TEST nvmf_target_disconnect_tc2 00:25:01.114 ************************************ 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3498551 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3498551 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3498551 ']' 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
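For tc2, disconnect_init brings up a real target: nvmf_tgt (pid 3498551) is launched inside the cvl_0_0_ns_spdk namespace on core mask 0xF0, and once its RPC socket is up the trace below configures a malloc-backed subsystem with a TCP listener on 10.0.0.2:4420. Condensed, that configuration is:

rpc_cmd bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The reconnect example (pid 3498693) is then started against that subsystem, the target is killed with SIGKILL two seconds later, and the long run of 'Read completed with error' and 'qpair failed and we were unable to recover it' entries that follows is the intended fallout of that forced disconnect.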
00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.114 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.114 [2024-07-15 13:04:19.289396] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:25:01.114 [2024-07-15 13:04:19.289491] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.373 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.373 [2024-07-15 13:04:19.358654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.373 [2024-07-15 13:04:19.473368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.373 [2024-07-15 13:04:19.473435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.373 [2024-07-15 13:04:19.473474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.373 [2024-07-15 13:04:19.473485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.373 [2024-07-15 13:04:19.473495] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.373 [2024-07-15 13:04:19.473546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:01.373 [2024-07-15 13:04:19.473609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:01.373 [2024-07-15 13:04:19.473674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:01.373 [2024-07-15 13:04:19.473677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 Malloc0 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:01.633 13:04:19 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 [2024-07-15 13:04:19.658133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 [2024-07-15 13:04:19.686405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3498693 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:01.633 13:04:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:01.633 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:03.536 13:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3498551 00:25:03.536 13:04:21 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Write completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Write completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Write completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Write completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.536 Read completed with error (sct=0, sc=8) 00:25:03.536 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 [2024-07-15 13:04:21.711569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting 
I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 [2024-07-15 13:04:21.711932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 
00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Write completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 Read completed with error (sct=0, sc=8) 00:25:03.537 starting I/O failed 00:25:03.537 [2024-07-15 13:04:21.712281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:03.537 [2024-07-15 13:04:21.712509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.712539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.712705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.712769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.712882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.712909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.713039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.713079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 
00:25:03.537 [2024-07-15 13:04:21.713252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.713276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.713443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.713467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.713653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.713675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.713824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.713850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.713973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.713999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.714120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.714143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.714310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.714350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.714499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.714524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.714676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.714700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 00:25:03.537 [2024-07-15 13:04:21.714827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.537 [2024-07-15 13:04:21.714853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.537 qpair failed and we were unable to recover it. 
00:25:03.537 [2024-07-15 13:04:21.715030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.715070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.715245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.715268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.715384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.715409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.715565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.715589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.715755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.715781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.715897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.715923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.716082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.716119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.716272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.716295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.716470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.716508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.716683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.716714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 
00:25:03.538 [2024-07-15 13:04:21.716850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.716875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.716985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.717011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.717194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.717217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.717368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.717391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.717576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.717599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.717764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.717790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.717903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.717929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.718075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.718113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.718371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.718395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.718621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.718644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 
00:25:03.538 [2024-07-15 13:04:21.718822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.718858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.718969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.718994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.719178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.719201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.719368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.719420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.719569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.719602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.719772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.719797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.719931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.719957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.720064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.720104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.720245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.720281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.720421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.720444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 
00:25:03.538 [2024-07-15 13:04:21.720609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.720633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.720793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.720818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.720931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.720957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.721175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.721199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.721360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.721383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.721553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.721576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.721778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.721804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.721920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.721946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.722098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.722139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.538 [2024-07-15 13:04:21.722326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.722348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 
00:25:03.538 [2024-07-15 13:04:21.722502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.538 [2024-07-15 13:04:21.722525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.538 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.722648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.722672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.722847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.722872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.722996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.723022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.723175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.723218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.723385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.723407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.723587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.723610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.723755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.723783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.723899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.723924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.724119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.724155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 
00:25:03.539 [2024-07-15 13:04:21.724361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.724385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.724565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.724589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.724742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.724769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.724876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.724901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.725032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.725057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.725258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.725281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.725447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.725469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.725619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.725643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.725783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.725819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.725933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.725958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 
00:25:03.539 [2024-07-15 13:04:21.726143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.726182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.726357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.726385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.726566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.726589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.726775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.726800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.726911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.726936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.727072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.727097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.727275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.727298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.727464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.727488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.727676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.727701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.727860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.727900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 
00:25:03.539 [2024-07-15 13:04:21.728053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.728079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.728250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.728275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.728457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.728508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.728681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.728705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.728881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.728906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.729047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.729071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.729243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.729268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.729445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.729484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.729631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.729654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.729818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.729844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 
00:25:03.539 [2024-07-15 13:04:21.729983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.539 [2024-07-15 13:04:21.730008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.539 qpair failed and we were unable to recover it. 00:25:03.539 [2024-07-15 13:04:21.730166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.730189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.730345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.730370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.730569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.730594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.730733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.730764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.730898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.730924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.731126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.731150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.731330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.731353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.731512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.731536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.731719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.731753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 
00:25:03.540 [2024-07-15 13:04:21.731887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.731912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.732104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.732143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.732306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.732344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.732530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.732591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.732748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.732774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.732893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.732919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.733111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.733142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.733309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.733355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.733505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.733528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.733735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.733763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 
00:25:03.540 [2024-07-15 13:04:21.733890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.733915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.734078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.734130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.734296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.734319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.734532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.734572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.734789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.734815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.734937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.734963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.735113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.735167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.735341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.735365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.735529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.735554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.735735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.735766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 
00:25:03.540 [2024-07-15 13:04:21.735875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.735903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.736047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.736074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.736262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.736301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.736497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.736536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.736671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.736694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.736836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.736862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.736973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.737008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.737137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.737162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.737282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.737307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.737488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.737512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 
00:25:03.540 [2024-07-15 13:04:21.737700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.737743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.540 qpair failed and we were unable to recover it. 00:25:03.540 [2024-07-15 13:04:21.737858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.540 [2024-07-15 13:04:21.737883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.738001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.738043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.738231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.738255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.738391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.738415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.738562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.738586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.738715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.738762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.738925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.738952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.739146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.739170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.739350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.739373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 
00:25:03.541 [2024-07-15 13:04:21.739506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.739531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.739710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.739735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.739889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.739917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.740097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.740123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.541 [2024-07-15 13:04:21.740306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.541 [2024-07-15 13:04:21.740348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.541 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.740506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.740546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.740704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.740754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.740866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.740893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.741019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.741059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.741191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.741216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 
00:25:03.824 [2024-07-15 13:04:21.741369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.741409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.741545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.741585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.741778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.741814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.741980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.742005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.742195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.742234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.742442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.742465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.742628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.742652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.742805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.742830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.742974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.742999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.743141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.743181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 
00:25:03.824 [2024-07-15 13:04:21.743289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.743314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.743427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.743459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.743642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.743666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.743827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.743852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.824 [2024-07-15 13:04:21.744057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.824 [2024-07-15 13:04:21.744096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.824 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.744211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.744254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.744368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.744408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.744558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.744583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.744778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.744831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.745043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.745084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 
00:25:03.825 [2024-07-15 13:04:21.745279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.745305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.745489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.745513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.745703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.745727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.745924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.745949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.746095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.746120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.746284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.746308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.746527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.746570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.746699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.746722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.746867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.746892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.747009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.747050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 
00:25:03.825 [2024-07-15 13:04:21.747211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.747256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.747408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.747432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.747562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.747586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.747764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.747789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.747938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.747964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.748128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.748167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.748325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.748348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.748492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.748516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.748680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.748721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.748870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.748896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 
00:25:03.825 [2024-07-15 13:04:21.749011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.749037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.749204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.749228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.749339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.749364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.749548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.749574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.749794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.749820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.749932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.749975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.750172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.750196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.750357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.750381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.750545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.750569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.750763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.750795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 
00:25:03.825 [2024-07-15 13:04:21.750917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.750958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.751091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.751131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.825 qpair failed and we were unable to recover it. 00:25:03.825 [2024-07-15 13:04:21.751270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.825 [2024-07-15 13:04:21.751300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.751499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.751522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.751708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.751731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.751905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.751934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.752129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.752184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.752333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.752419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.752585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.752608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.752808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.752833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 
00:25:03.826 [2024-07-15 13:04:21.752999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.753029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.753210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.753253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.753415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.753458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.753651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.753674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.753807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.753833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.753947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.753990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.754194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.754237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.754400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.754443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.754632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.754655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.754813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.754857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 
00:25:03.826 [2024-07-15 13:04:21.754994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.755024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.755188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.755219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.755407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.755449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.755622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.755646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.755853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.755896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.756055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.756098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.756263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.756306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.756506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.756529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.756649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.756687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.756846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.756891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 
00:25:03.826 [2024-07-15 13:04:21.757071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.757110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.757292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.757333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.757511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.757534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.757680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.757718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.757885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.757910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.758066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.758090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.758273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.758324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.758474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.758512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.758653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.758677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 00:25:03.826 [2024-07-15 13:04:21.758817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.826 [2024-07-15 13:04:21.758860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.826 qpair failed and we were unable to recover it. 
00:25:03.826 [2024-07-15 13:04:21.759057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.759098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.759258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.759282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.759497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.759545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.759729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.759776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.759971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.760014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.760152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.760205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.760352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.760414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.760526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.760550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.760750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.760777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.760964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.761012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 
00:25:03.827 [2024-07-15 13:04:21.761173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.761218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.761388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.761446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.761609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.761633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.761795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.761821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.761989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.762012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.762152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.762193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.762397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.762439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.762591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.762614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.762778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.762817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.762956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.763007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 
00:25:03.827 [2024-07-15 13:04:21.763215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.763238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.763388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.763412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.763563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.763600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.763729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.763774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.763959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.763983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.764125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.764148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.764346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.764369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.764526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.764549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.764663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.764686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.764852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.764878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 
00:25:03.827 [2024-07-15 13:04:21.765075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.765114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.765315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.765364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.765527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.765550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.765701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.765748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.765910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.765957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.827 [2024-07-15 13:04:21.766091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.827 [2024-07-15 13:04:21.766115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.827 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.766248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.766272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.766427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.766451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.766629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.766652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.766793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.766850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 
00:25:03.828 [2024-07-15 13:04:21.767013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.767062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.767218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.767282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.767482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.767505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.767645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.767682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.767818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.767843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.767986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.768036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.768163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.768211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.768417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.768467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.768624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.768647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.768810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.768835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 
00:25:03.828 [2024-07-15 13:04:21.768967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.768992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.769147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.769170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.769359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.769382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.769545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.769583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.769788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.769813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.769939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.769987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.770137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.770159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.770342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.770366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.770491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.770529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.770679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.770717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 
00:25:03.828 [2024-07-15 13:04:21.770900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.770948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.771158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.771207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.771368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.771391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.771588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.771611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.771774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.771813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.771955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.772007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.772136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.772186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.828 [2024-07-15 13:04:21.772297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.828 [2024-07-15 13:04:21.772346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.828 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.772520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.772543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.772689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.772726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 
00:25:03.829 [2024-07-15 13:04:21.772872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.772898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.773055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.773080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.773257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.773307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.773518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.773542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.773750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.773775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.773924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.773947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.774125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.774174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.774316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.774366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.774545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.774568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.774733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.774762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 
00:25:03.829 [2024-07-15 13:04:21.774901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.774957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.775136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.775193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.775327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.775385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.775562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.775585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.775752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.775777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.775901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.775925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.776045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.776091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.776280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.776304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.776498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.776521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.776703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.776748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 
00:25:03.829 [2024-07-15 13:04:21.776891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.776941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.777115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.777163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.777327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.777397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.777542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.777566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.777705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.777729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.777863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.777888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.778033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.778056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.778245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.778272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.778391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.778415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 00:25:03.829 [2024-07-15 13:04:21.778550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.829 [2024-07-15 13:04:21.778574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.829 qpair failed and we were unable to recover it. 
00:25:03.830 [2024-07-15 13:04:21.778752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.778777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.778885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.778924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.779032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.779056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.779241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.779264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.779422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.779445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.779625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.779648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.779831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.779882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.780043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.780092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.780249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.780272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.780452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.780500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 
00:25:03.830 [2024-07-15 13:04:21.780651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.780674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.780890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.780942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.781109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.781166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.781335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.781388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.781552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.781576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.781815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.781840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.781980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.782028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.782191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.782244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.782447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.782495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.782684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.782707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 
00:25:03.830 [2024-07-15 13:04:21.782884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.782936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.783099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.783153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.783326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.783376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.783551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.783574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.783743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.783767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.783909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.783960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.784132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.784186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.784351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.784374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.784501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.784539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.784692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.784730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 
00:25:03.830 [2024-07-15 13:04:21.784859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.784884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.785008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.785049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.785244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.785267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.785453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.785476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.785663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.785686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.785828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.785879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.786053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.786103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.786288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.786337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.786498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.786521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 00:25:03.830 [2024-07-15 13:04:21.786710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.830 [2024-07-15 13:04:21.786733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.830 qpair failed and we were unable to recover it. 
00:25:03.830 [2024-07-15 13:04:21.786914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.786968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.787139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.787192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.787373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.787396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.787556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.787579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.787753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.787797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.787939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.787988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.788173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.788211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.788362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.788385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.788576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.788600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.788750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.788774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 
00:25:03.831 [2024-07-15 13:04:21.788913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.788963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.789126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.789165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.789274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.789297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.789447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.789471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.789662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.789685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.789808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.789833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.789942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.789967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.790154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.790176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.790374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.790398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.790588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.790611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 
00:25:03.831 [2024-07-15 13:04:21.790781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.790819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.790961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.791017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.791211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.791254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.791409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.791439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.791592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.791629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.791783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.791844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.791978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.792045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.792217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.792273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.792382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.792406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.792579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.792603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 
00:25:03.831 [2024-07-15 13:04:21.792777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.792803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.792964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.792989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.793157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.793215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.793323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.793347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.793481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.793505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.793685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.793708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.793842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.793894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.794006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.794030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.794184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.794222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.831 [2024-07-15 13:04:21.794409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.794447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 
00:25:03.831 [2024-07-15 13:04:21.794633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.831 [2024-07-15 13:04:21.794656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.831 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.794842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.794898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.795077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.795126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.795305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.795354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.795520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.795543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.795712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.795735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.795904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.795953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.796089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.796139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.796299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.796348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.796502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.796525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 
00:25:03.832 [2024-07-15 13:04:21.796676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.796714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.796843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.796884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.796988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.797012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.797205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.797230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.797377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.797410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.797555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.797592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.797763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.797787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.797918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.797966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.798113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.798138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.798270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.798308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 
00:25:03.832 [2024-07-15 13:04:21.798436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.798459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.798609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.798633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.798780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.798804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.798928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.798967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.799131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.799168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.799323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.799346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.799473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.799500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.799663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.799701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.799832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.799858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.799996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.800022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 
00:25:03.832 [2024-07-15 13:04:21.800186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.800225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.800411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.800445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.800628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.800652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.800827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.800890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.801019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.801071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.801236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.801286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.801492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.801515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.801695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.801719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.801856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.801918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.802080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.802127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 
00:25:03.832 [2024-07-15 13:04:21.802339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.802389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.832 qpair failed and we were unable to recover it. 00:25:03.832 [2024-07-15 13:04:21.802562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.832 [2024-07-15 13:04:21.802585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.802795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.802945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.802997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.803144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.803201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.803371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.803394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.803610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.803633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.803777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.803804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.803927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.803975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.804133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.804156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 
00:25:03.833 [2024-07-15 13:04:21.804310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.804357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.804518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.804541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.804690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.804728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.804868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.804920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.805055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.805093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.805259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.805308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.805493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.805516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.805679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.805702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.805860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.805911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.806064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.806103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 
00:25:03.833 [2024-07-15 13:04:21.806282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.806305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.806486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.806508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.806621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.806659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.806827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.806890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.807048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.807102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.807283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.807332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.807484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.807507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.807663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.807701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.807884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.807934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.808078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.808124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 
00:25:03.833 [2024-07-15 13:04:21.808327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.808377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.808578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.808600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.808777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.808800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.808932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.808984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.809164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.809214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.809365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.833 [2024-07-15 13:04:21.809413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.833 qpair failed and we were unable to recover it. 00:25:03.833 [2024-07-15 13:04:21.809592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.809615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.809752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.809798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.809926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.809979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.810139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.810189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 
00:25:03.834 [2024-07-15 13:04:21.810356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.810405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.810594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.810617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.810793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.810831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.810968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.810992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.811141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.811165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.811312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.811335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.811532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.811555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.811730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.811775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.811908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.811959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.812094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.812150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 
00:25:03.834 [2024-07-15 13:04:21.812258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.812321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.812468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.812491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.812654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.812678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.812882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.812936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.813055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.813116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.813284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.813333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.813530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.813553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.813719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.813748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.813913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.813970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.814156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.814180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 
00:25:03.834 [2024-07-15 13:04:21.814374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.814421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.814592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.814615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.814816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.814866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.815032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.815082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.815260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.815284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.815455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.815505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.815667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.815690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.815902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.815951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.816159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.816209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.816382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.816431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 
00:25:03.834 [2024-07-15 13:04:21.816555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.816592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.816759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.816798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.816947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.816998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.817148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.817199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.817360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.817407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.834 qpair failed and we were unable to recover it. 00:25:03.834 [2024-07-15 13:04:21.817587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.834 [2024-07-15 13:04:21.817610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.835 qpair failed and we were unable to recover it. 00:25:03.835 [2024-07-15 13:04:21.817747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.835 [2024-07-15 13:04:21.817773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.835 qpair failed and we were unable to recover it. 00:25:03.835 [2024-07-15 13:04:21.817886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.835 [2024-07-15 13:04:21.817936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.835 qpair failed and we were unable to recover it. 00:25:03.835 [2024-07-15 13:04:21.818113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.835 [2024-07-15 13:04:21.818162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.835 qpair failed and we were unable to recover it. 00:25:03.835 [2024-07-15 13:04:21.818339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.835 [2024-07-15 13:04:21.818387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.835 qpair failed and we were unable to recover it. 
00:25:03.841 [2024-07-15 13:04:21.857821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.857878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.858045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.858094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.858234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.858285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.858468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.858491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.858623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.858661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.858855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.858914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.859091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.859116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.859300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.859351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.859499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.859537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.859788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.859812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 
00:25:03.841 [2024-07-15 13:04:21.859935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.859987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.860175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.860224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.860359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.841 [2024-07-15 13:04:21.860414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.841 qpair failed and we were unable to recover it. 00:25:03.841 [2024-07-15 13:04:21.860604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.860630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.860803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.860826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.860967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.861012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.861131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.861154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.861346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.861383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.861559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.861582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.861769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.861814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 
00:25:03.842 [2024-07-15 13:04:21.861977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.862027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.862211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.862259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.862428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.862451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.862638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.862661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.862879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.862927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.863070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.863117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.863293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.863340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.863598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.863621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.863786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.863810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.863931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.863987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 
00:25:03.842 [2024-07-15 13:04:21.864185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.864232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.864406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.864455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.864677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.864700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.864868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.864893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.865079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.865129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.865330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.865377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.865531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.865554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.865702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.865725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.865878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.865927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 00:25:03.842 [2024-07-15 13:04:21.866122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.842 [2024-07-15 13:04:21.866179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.842 qpair failed and we were unable to recover it. 
00:25:03.842 [2024-07-15 13:04:21.866360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.866409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.866597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.866794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.866820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.866944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.866983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.867125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.867149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.867340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.867362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.867518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.867555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.867726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.867755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.867928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.867976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.868149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.868172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 
00:25:03.843 [2024-07-15 13:04:21.868365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.868413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.868668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.868692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.868838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.868896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.869098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.869148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.869332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.869380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.869544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.869567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.869747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.869772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.869930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.869980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.870132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.870179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 00:25:03.843 [2024-07-15 13:04:21.870349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.843 [2024-07-15 13:04:21.870397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.843 qpair failed and we were unable to recover it. 
00:25:03.843 [2024-07-15 13:04:21.870591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.870614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.870805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.870864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.871063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.871109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.871279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.871328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.871539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.871587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.871752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.871808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.871952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.872004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.872170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.872214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.872345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.872398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.872584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.872607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 
00:25:03.844 [2024-07-15 13:04:21.872777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.872801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.872932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.872969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.873120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.873159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.873352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.844 [2024-07-15 13:04:21.873376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.844 qpair failed and we were unable to recover it. 00:25:03.844 [2024-07-15 13:04:21.873560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.873582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.873754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.873807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.873966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.874016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.874224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.874272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.874449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.874498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.874744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.874769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 
00:25:03.845 [2024-07-15 13:04:21.874927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.874975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.875128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.845 [2024-07-15 13:04:21.875183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.845 qpair failed and we were unable to recover it. 00:25:03.845 [2024-07-15 13:04:21.875383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.875430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.875570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.875593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.875765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.875804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.875978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.876032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.876175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.876223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.876421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.876468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.876658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.876682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.876870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.876922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 
00:25:03.846 [2024-07-15 13:04:21.877122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.877172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.877298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.877321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.877569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.877608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.877839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.877892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.846 qpair failed and we were unable to recover it. 00:25:03.846 [2024-07-15 13:04:21.878104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.846 [2024-07-15 13:04:21.878151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.878295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.878344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.878552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.878575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.878793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.878817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.878970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.879016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.879149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.879203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 
00:25:03.847 [2024-07-15 13:04:21.879369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.879413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.879586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.879609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.879804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.879828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.879994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.880045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.880281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.880304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.880482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.880505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.880702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.880746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.880879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.880930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.847 [2024-07-15 13:04:21.881126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.847 [2024-07-15 13:04:21.881174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.847 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.881342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.881390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 
00:25:03.848 [2024-07-15 13:04:21.881544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.881567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.881766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.881790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.881973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.882020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.882252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.882298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.882436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.882489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.882670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.882693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.882854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.882906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.883090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.883136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.883302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.883351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.883553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.883576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 
00:25:03.848 [2024-07-15 13:04:21.883763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.883805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.883947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.883994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.884179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.884227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.884393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.884441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.884643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.884666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.884867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.884919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.885091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.885140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.885332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.885381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.885527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.885550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 00:25:03.848 [2024-07-15 13:04:21.885754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.848 [2024-07-15 13:04:21.885791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.848 qpair failed and we were unable to recover it. 
00:25:03.848 [2024-07-15 13:04:21.885918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.848 [2024-07-15 13:04:21.885970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420
00:25:03.848 qpair failed and we were unable to recover it.
00:25:03.848 [... the same three-line sequence — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for every connection retry from 13:04:21.885918 through 13:04:21.935727 ...]
00:25:03.860 [2024-07-15 13:04:21.935727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:03.860 [2024-07-15 13:04:21.935781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420
00:25:03.860 qpair failed and we were unable to recover it.
00:25:03.860 [2024-07-15 13:04:21.935942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.935965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.936116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.936162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.936343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.936390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.936593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.936642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.936768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.936794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.937003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.937061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.937227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.937274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.937436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.937483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.937646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.937669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.937829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.937878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 
00:25:03.860 [2024-07-15 13:04:21.938041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.938088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.938252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.938303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.938501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.938524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.938704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.938747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.938903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.938953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.939112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.939165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.939370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.939419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.939584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.939607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.939813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.939869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.940027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.940074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 
00:25:03.860 [2024-07-15 13:04:21.940256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.940283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.940411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.940449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.940566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.940589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.940735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.940768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.940923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.940946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.941130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.941154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.860 qpair failed and we were unable to recover it. 00:25:03.860 [2024-07-15 13:04:21.941304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.860 [2024-07-15 13:04:21.941328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.941439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.941463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.941627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.941676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.941850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.941889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 
00:25:03.861 [2024-07-15 13:04:21.942038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.942083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.942241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.942295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.942452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.942476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.942635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.942674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.942856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.942883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.943035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.943093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.943263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.943312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.943456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.943479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.943633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.943657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.943826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.943880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 
00:25:03.861 [2024-07-15 13:04:21.944045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.944095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.944272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.944315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.944464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.944487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.944698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.944744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.944913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.944961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.945113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.945166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.945323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.945371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.945540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.945564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.945779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.945819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.945967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.946015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 
00:25:03.861 [2024-07-15 13:04:21.946170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.946217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.946374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.946421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.946610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.946633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.946803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.946867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.947066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.947117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.947335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.947383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.947591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.947614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.947816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.947876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.948067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.948113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.948252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.948305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 
00:25:03.861 [2024-07-15 13:04:21.948520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.948561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.948753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.948781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.948944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.948993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.949191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.949241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.949496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.949543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.949718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.949763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.949927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.949950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.950125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.950175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.950438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.950486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.950659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.950682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 
00:25:03.861 [2024-07-15 13:04:21.950860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.950886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.951071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.951121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.951388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.951437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.951610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.951633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.951829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.951879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.952016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.952069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.952276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.861 [2024-07-15 13:04:21.952325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.861 qpair failed and we were unable to recover it. 00:25:03.861 [2024-07-15 13:04:21.952571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.952619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.952820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.952844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.953021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.953066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 
00:25:03.862 [2024-07-15 13:04:21.953230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.953253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.953406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.953442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.953611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.953634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.953805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.953828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.953985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.954007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.954128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.954162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.954368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.954391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.954558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.954581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.954773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.954798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.954930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.954955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 
00:25:03.862 [2024-07-15 13:04:21.955090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.955114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.955256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.955313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.955543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.955566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.955823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.955871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.956036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.956083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.956278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.956301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.956539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.956587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.956693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.956733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.956877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.956938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.957178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.957227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 
00:25:03.862 [2024-07-15 13:04:21.957402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.957455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.957646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.957669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.957843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.957871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.958032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.958081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.958290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.958340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.958553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.958576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.958797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.958822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.958951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.959002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.959135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.959192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.959421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.959468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 
00:25:03.862 [2024-07-15 13:04:21.959597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.959620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.959828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.959873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.862 qpair failed and we were unable to recover it. 00:25:03.862 [2024-07-15 13:04:21.960083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.862 [2024-07-15 13:04:21.960106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.960300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.960323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.960520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.960543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.960727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.960772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.960970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.961017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.961211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.961252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.961494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.961543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.961745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.961794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 
00:25:03.863 [2024-07-15 13:04:21.961975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.962024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.962224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.962268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.962479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.962528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.962679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.962702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.962857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.962908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.963075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.963130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.963298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.963347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.963565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.963618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.963834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.963881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.964038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.964062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 
00:25:03.863 [2024-07-15 13:04:21.964233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.964258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.964412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.964446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.964657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.964683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.964868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.964925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.965108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.965158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.965377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.965426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.965569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.965593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.965749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.965775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.965948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.966013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 00:25:03.863 [2024-07-15 13:04:21.966242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:03.863 [2024-07-15 13:04:21.966292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:03.863 qpair failed and we were unable to recover it. 
00:25:04.141 [2024-07-15 13:04:22.006884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.006912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.007051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.007078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.007209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.007237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.007410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.007438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.007578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.007605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.007720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.007765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.007886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.007913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.008071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.008112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.008286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.008314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.008436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.008463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 
00:25:04.141 [2024-07-15 13:04:22.008634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.008659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.008831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.008859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.008972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.008999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.009154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.009218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.009461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.009526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.009754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.009819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.009929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.009956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.010122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.010148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.010305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.010331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.010491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.010545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 
00:25:04.141 [2024-07-15 13:04:22.010713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.010746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.010920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.010946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.011107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.141 [2024-07-15 13:04:22.011158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.141 qpair failed and we were unable to recover it. 00:25:04.141 [2024-07-15 13:04:22.011277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.011331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.011485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.011541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.011682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.011708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.011846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.011873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.012014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.012041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.012201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.012228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.012392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.012419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 
00:25:04.142 [2024-07-15 13:04:22.012559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.012586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.012693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.012719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.012853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.012880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.013925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.013952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 
00:25:04.142 [2024-07-15 13:04:22.014056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.014083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.014194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.014220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.014361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.014388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.014526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.014552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.014692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.014719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.014845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.014872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.015007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.015033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.015165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.015196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.015322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.015349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.015508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.015534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 
00:25:04.142 [2024-07-15 13:04:22.015639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.015666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.015833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.015861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.015993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.016019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.016187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.016214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.016373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.016400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.016560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.016587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.016685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.016711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.016860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.016911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.017072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.017126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.017283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.017310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 
00:25:04.142 [2024-07-15 13:04:22.017445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.017471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.017609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.017635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.017769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.017796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.017933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.142 [2024-07-15 13:04:22.017986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.142 qpair failed and we were unable to recover it. 00:25:04.142 [2024-07-15 13:04:22.018109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.018135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.018257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.018283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.018416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.018442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.018601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.018627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.018762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.018788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.018940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 
00:25:04.143 [2024-07-15 13:04:22.019138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.019272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.019402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.019586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.019754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.019910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.019936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.020080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.020132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.020259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.020301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.020460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.020486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.020615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.020656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 
00:25:04.143 [2024-07-15 13:04:22.020772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.020800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.020923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.020977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.021085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.021152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.021309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.021334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.021501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.021526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.021660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.021686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.021865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.021917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.022096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.022151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.022305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.022355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.022493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.022534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 
00:25:04.143 [2024-07-15 13:04:22.022664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.022690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.022864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.022916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.023039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.023125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.023252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.023278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.023451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.023478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.023629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.023654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.023828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.023882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.024003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.024061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.024213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.024264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.024434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.024460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 
00:25:04.143 [2024-07-15 13:04:22.024570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.024596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.024781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.024808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.024912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.024938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.025072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.143 qpair failed and we were unable to recover it. 00:25:04.143 [2024-07-15 13:04:22.025224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.143 [2024-07-15 13:04:22.025250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.025425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.025451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.025583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.025625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.025782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.025808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.025960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.025986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.026136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.026193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 
00:25:04.144 [2024-07-15 13:04:22.026324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.026365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.026488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.026514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.026622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.026649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.026782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.026810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 
00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Write completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 Read completed with error (sct=0, sc=8) 00:25:04.144 starting I/O failed 00:25:04.144 [2024-07-15 13:04:22.027141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:04.144 [2024-07-15 13:04:22.027308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.027348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.027463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.027491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.027634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.027659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.027841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.027868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.027966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.027992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.028129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.028154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.028326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.028351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.028508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.028533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 
00:25:04.144 [2024-07-15 13:04:22.028662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.028687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.028824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.028851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.028990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.029016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.029205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.029257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.029476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.029529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.029694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.029770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.029951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.029977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.030149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.030205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.030403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.030475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.030691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.030752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 
00:25:04.144 [2024-07-15 13:04:22.030915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.030940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.031072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.031098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.144 qpair failed and we were unable to recover it. 00:25:04.144 [2024-07-15 13:04:22.031259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.144 [2024-07-15 13:04:22.031321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.031541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.031594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.031798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.031824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.031960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.031985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.032146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.032197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.032379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.032431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.032646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.032696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 00:25:04.145 [2024-07-15 13:04:22.032863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.145 [2024-07-15 13:04:22.032889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.145 qpair failed and we were unable to recover it. 
00:25:04.145 [2024-07-15 13:04:22.033021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.145 [2024-07-15 13:04:22.033047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420
00:25:04.145 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 13:04:22.033 and 13:04:22.092 ...]
00:25:04.150 [2024-07-15 13:04:22.092632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.150 [2024-07-15 13:04:22.092705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420
00:25:04.150 qpair failed and we were unable to recover it.
00:25:04.150 [2024-07-15 13:04:22.092935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.092998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.093208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.093271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.093517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.093579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.093797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.093885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.094133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.094197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.094432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.094495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.094705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.094787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.095013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.095077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.095312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.095375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.095585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.095648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 
00:25:04.150 [2024-07-15 13:04:22.095886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.095951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.096187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.096250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.096469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.096532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.096763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.096828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.097086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.097150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.097391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.150 [2024-07-15 13:04:22.097455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.150 qpair failed and we were unable to recover it. 00:25:04.150 [2024-07-15 13:04:22.097688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.097768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.097984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.098047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.098292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.098354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.098592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.098655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 
00:25:04.151 [2024-07-15 13:04:22.098925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.098989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.099204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.099268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.099503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.099566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.099755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.099819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.100056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.100120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.100331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.100395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.100633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.100707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.100971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.101035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.101271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.101334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.101573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.101637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 
00:25:04.151 [2024-07-15 13:04:22.101899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.101964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.102172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.102235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.102473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.102536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.102782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.102848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.103050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.103112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.103321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.103384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.103619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.103682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.103934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.103998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.104215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.104277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.104494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.104557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 
00:25:04.151 [2024-07-15 13:04:22.104780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.104846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.105087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.105150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.105387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.105449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.105628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.105691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.105958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.106022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.106199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.106261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.106446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.106508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.106783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.106848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.107077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.107140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.107372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.107435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 
00:25:04.151 [2024-07-15 13:04:22.107673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.107754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.107972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.108034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.108242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.108305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.108541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.108614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.108832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.108898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.109106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.109169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.109404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.151 [2024-07-15 13:04:22.109466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.151 qpair failed and we were unable to recover it. 00:25:04.151 [2024-07-15 13:04:22.109643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.109705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.109971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.110035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.110277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.110341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 
00:25:04.152 [2024-07-15 13:04:22.110542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.110605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.110823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.110888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.111105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.111167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.111403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.111467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.111703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.111780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.112023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.112087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.112332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.112396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.112644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.112708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.112950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.113013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.113224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.113288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 
00:25:04.152 [2024-07-15 13:04:22.113492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.113555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.113792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.113857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.114072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.114135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.114345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.114408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.114619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.114682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.114933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.114997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.115234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.115296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.115539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.115602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.115811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.115877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.116087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.116150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 
00:25:04.152 [2024-07-15 13:04:22.116358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.116421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.116671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.116733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.116961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.117024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.117263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.117327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.117540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.117603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.117844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.117910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.118123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.118186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.118404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.118466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.118700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.118780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.119026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.119090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 
00:25:04.152 [2024-07-15 13:04:22.119299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.119361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.119600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.119663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.119931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.119996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.120241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.120304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.120522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.120595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.120837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.120901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.121077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.121140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.121309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.121372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.152 qpair failed and we were unable to recover it. 00:25:04.152 [2024-07-15 13:04:22.121619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.152 [2024-07-15 13:04:22.121682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.121884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.121949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 
00:25:04.153 [2024-07-15 13:04:22.122183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.122247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.122550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.122613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.122858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.122923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.123238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.123301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.123519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.123581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.123848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.123912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.124209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.124272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.124517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.124581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.124877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.124942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.125247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.125310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 
00:25:04.153 [2024-07-15 13:04:22.125614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.125677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.125923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.125988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.126249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.126312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.126570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.126633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.126923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.126988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.127271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.127334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.127598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.127662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.127915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.127980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.128248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.128310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.128637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.128702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 
00:25:04.153 [2024-07-15 13:04:22.128936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.129004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.129248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.129321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.129580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.129644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.129931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.129996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.130293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.130357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.130613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.130685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.130946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.131011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.131302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.131365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.131653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.131715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.131974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.132038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 
00:25:04.153 [2024-07-15 13:04:22.132300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.132364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.132565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.132628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.132904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.132969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.133197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.133261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.133556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.133620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.133898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.133962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.134278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.134341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.134577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.134640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.134907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.134971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 00:25:04.153 [2024-07-15 13:04:22.135190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.153 [2024-07-15 13:04:22.135253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.153 qpair failed and we were unable to recover it. 
00:25:04.154 [2024-07-15 13:04:22.135439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.135501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.135772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.135837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.136134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.136205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.136513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.136576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.136828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.136893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.137146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.137209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.137529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.137593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.137824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.137888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.138196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.138268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.138568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.138632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 
00:25:04.154 [2024-07-15 13:04:22.138898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.138963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.139258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.139321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.139620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.139690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.139936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.140000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.140302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.140365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.140589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.140661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.140932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.141009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.141321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.141384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.141705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.141787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.142060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.142124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 
00:25:04.154 [2024-07-15 13:04:22.142425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.142488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.142782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.142847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.143075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.143138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.143401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.143465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.143726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.143830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.144069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.144133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.144397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.144460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.144751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.144816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.145096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.145160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.145417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.145481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 
00:25:04.154 [2024-07-15 13:04:22.145750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.145814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.146096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.146159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.146462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.146525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.146824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.146889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.147196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.147259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.154 qpair failed and we were unable to recover it. 00:25:04.154 [2024-07-15 13:04:22.147551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.154 [2024-07-15 13:04:22.147614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.147897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.147962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.148223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.148288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.148543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.148621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.148884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.148938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 
00:25:04.155 [2024-07-15 13:04:22.149209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.149272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.149533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.149597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.149835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.149900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.150121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.150184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.150489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.150553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.150865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.150931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.151205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.151269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.151525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.151589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.151872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.151936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.152180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.152244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 
00:25:04.155 [2024-07-15 13:04:22.152459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.152523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.152769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.152835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.153071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.153135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.153356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.153418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.153729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.153814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.154047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.154110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.154350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.154413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.154660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.154734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.154974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.155038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.155349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.155421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 
00:25:04.155 [2024-07-15 13:04:22.155734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.155815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.156097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.156160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.156407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.156471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.156803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.156869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.157179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.157245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.157528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.157592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.157845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.157909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.158246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.158309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.158603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.158663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.158907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.158972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 
00:25:04.155 [2024-07-15 13:04:22.159215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.159278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.159602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.159666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.159895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.159960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.160294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.160357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.160640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.160703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.160939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.161004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.155 qpair failed and we were unable to recover it. 00:25:04.155 [2024-07-15 13:04:22.161294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.155 [2024-07-15 13:04:22.161366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.161712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.161794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.162033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.162096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.162400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.162464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 
00:25:04.156 [2024-07-15 13:04:22.162703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.162784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.163054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.163119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.163423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.163487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.163784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.163849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.164109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.164173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.164468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.164531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.164836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.164901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.165176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.165240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.165547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.165618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.165863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.165928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 
00:25:04.156 [2024-07-15 13:04:22.166240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.166304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.166615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.166677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.166922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.166986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.167263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.167328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.167607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.167672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.167949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.168013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.168305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.168369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.168700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.168784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.169016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.169079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.169350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.169414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 
00:25:04.156 [2024-07-15 13:04:22.169718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.169810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.170122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.170186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.170449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.170513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.170828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.170903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.171200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.171263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.171540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.171604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.171863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.171926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.172207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.172271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.172556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.172620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.172896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.172960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 
00:25:04.156 [2024-07-15 13:04:22.173282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.173346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.173610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.173674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.173927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.173990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.174267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.174330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.174589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.174652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.174913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.174978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.175257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.175321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.156 [2024-07-15 13:04:22.175617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.156 [2024-07-15 13:04:22.175690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.156 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.175972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.176036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.176323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.176387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 
00:25:04.157 [2024-07-15 13:04:22.176674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.176757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.177037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.177101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.177369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.177433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.177728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.177810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.178119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.178183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.178482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.178546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.178849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.178914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.179197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.179261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.179557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.179621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.179868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.179930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 
00:25:04.157 [2024-07-15 13:04:22.180246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.180320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.180642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.180706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.180984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.181048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.181268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.181332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.181646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.181710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.182044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.182109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.182409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.182474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.182780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.182846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.183169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.183233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.183500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.183563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 
00:25:04.157 [2024-07-15 13:04:22.183900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.183964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.184298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.184361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.184640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.184701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.185048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.185113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.185412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.185476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.185734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.185813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.186129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.186193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.186465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.186530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.186839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.186905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.187221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.187283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 
00:25:04.157 [2024-07-15 13:04:22.187564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.187630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.187948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.188013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.188315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.188378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.188645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.188709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.188998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.189064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.189368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.189431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.189748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.189813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.190091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.190154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.157 [2024-07-15 13:04:22.190474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.157 [2024-07-15 13:04:22.190538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.157 qpair failed and we were unable to recover it. 00:25:04.158 [2024-07-15 13:04:22.190847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.158 [2024-07-15 13:04:22.190913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.158 qpair failed and we were unable to recover it. 
00:25:04.158 [2024-07-15 13:04:22.191219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.158 [2024-07-15 13:04:22.191293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420
00:25:04.158 qpair failed and we were unable to recover it.
00:25:04.158 [... the same three-line pattern (connect() failed, errno = 111 / sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts between 13:04:22.191 and 13:04:22.268; duplicate entries condensed ...]
00:25:04.163 [2024-07-15 13:04:22.267827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:04.163 [2024-07-15 13:04:22.267893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420
00:25:04.163 qpair failed and we were unable to recover it.
00:25:04.163 [2024-07-15 13:04:22.268208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.268271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.268579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.268642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.268980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.269046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.269327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.269390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.269717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.269795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.270125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.270189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.270503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.270567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.270879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.270943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.271240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.271304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.271622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.271686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 
00:25:04.163 [2024-07-15 13:04:22.272013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.272080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.272412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.272477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.272795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.272861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.273176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.273240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.273514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.273577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.273867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.273932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.274260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.274324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.274648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.274712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.274966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.275040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.275356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.275421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 
00:25:04.163 [2024-07-15 13:04:22.275754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.275819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.276088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.276154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.276452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.276516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.276789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.276855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.277172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.163 [2024-07-15 13:04:22.277237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.163 qpair failed and we were unable to recover it. 00:25:04.163 [2024-07-15 13:04:22.277553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.277616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.277923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.277988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.278300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.278364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.278592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.278652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.278984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.279049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 
00:25:04.164 [2024-07-15 13:04:22.279370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.279435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.279725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.279814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.280144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.280209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.280485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.280549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.280866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.280932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.281223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.281287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.281562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.281624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.281938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.282004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.282288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.282352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.282549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.282612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 
00:25:04.164 [2024-07-15 13:04:22.282829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.282893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.283107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.283169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.283377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.283441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.283679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.283756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.283970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.284033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.284220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.284292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.284531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.284596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.284807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.284872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.285105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.285169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.285497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.285561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 
00:25:04.164 [2024-07-15 13:04:22.285794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.285858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.286087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.286151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.286365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.286428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.286689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.286774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.287010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.287074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.287331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.287394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.287626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.287690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.287927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.288003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.288274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.288337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.288583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.288648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 
00:25:04.164 [2024-07-15 13:04:22.288901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.288967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.289185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.289248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.289489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.289552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.289811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.289876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.290082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.290145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.290354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.290419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.164 [2024-07-15 13:04:22.290657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.164 [2024-07-15 13:04:22.290720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.164 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.290986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.291049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.291246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.291310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.291527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.291590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 
00:25:04.165 [2024-07-15 13:04:22.291849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.291915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.292136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.292200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.292450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.292527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.292729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.292809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.293046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.293109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.293347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.293410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.293656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.293719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.293956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.294020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.294205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.294269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.294476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.294540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 
00:25:04.165 [2024-07-15 13:04:22.294791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.294856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.295046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.295109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.295366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.295430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.295681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.295771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.296046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.296108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.296349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.296412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.296682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.296767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.297013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.297087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.297306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.297370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.297612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.297676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 
00:25:04.165 [2024-07-15 13:04:22.297897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.297960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.298186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.298251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.298494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.298558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.298799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.298865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.299095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.299158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.299377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.299440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.299683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.299761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.299981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.300049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.300305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.300368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.300635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.300699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 
00:25:04.165 [2024-07-15 13:04:22.300954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.301020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.301274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.301337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.301583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.301647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.301932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.302008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.302247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.302311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.302573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.302637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.302873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.302937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.303181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.165 [2024-07-15 13:04:22.303244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.165 qpair failed and we were unable to recover it. 00:25:04.165 [2024-07-15 13:04:22.303451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.303514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.303795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.303861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 
00:25:04.166 [2024-07-15 13:04:22.304095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.304158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.304343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.304406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.304645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.304708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.304958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.305024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.305257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.305320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.305560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.305624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.305836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.305901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.306111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.306175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.306410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.306474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.306709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.306787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 
00:25:04.166 [2024-07-15 13:04:22.307002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.307065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.307287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.307351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.307587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.307650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.307873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.307938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.308176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.308239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.308477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.308540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.308785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.308850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.309103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.309166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.309407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.309470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.309714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.309797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 
00:25:04.166 [2024-07-15 13:04:22.310020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.310083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.310319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.310382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.310616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.310679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.310883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.310948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.311155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.311218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.311465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.311528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.311735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.311835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.312054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.312118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.312329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.312391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.312625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.312688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 
00:25:04.166 [2024-07-15 13:04:22.312944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.313016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.313255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.313319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.313563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.313626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.313833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.313898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.314139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.314203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.166 [2024-07-15 13:04:22.314442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.166 [2024-07-15 13:04:22.314506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.166 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.314706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.314789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.315003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.315084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.315345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.315410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.315618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.315680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 
00:25:04.167 [2024-07-15 13:04:22.315913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.315977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.316212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.316275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.316509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.316572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.316823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.316889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.317116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.317179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.317393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.317457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.317691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.317766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.318000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.318063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.318274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.318336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.318584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.318648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 
00:25:04.167 [2024-07-15 13:04:22.318843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.318908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.319145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.319208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.319417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.319481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.319667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.319730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.320004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.320068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.320280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.320343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.320559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.320621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.320813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.320888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.321129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.321194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.321406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.321468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 
00:25:04.167 [2024-07-15 13:04:22.321705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.321784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.321976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.322039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.322248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.322311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.322521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.322585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.322790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.322854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.323048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.323111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.323321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.323384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.323590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.323652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.323923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.323988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.324202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.324265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 
00:25:04.167 [2024-07-15 13:04:22.324504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.324566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.324825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.324891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.325125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.325190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.325425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.325488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.167 [2024-07-15 13:04:22.325731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.167 [2024-07-15 13:04:22.325811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.167 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.326010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.326074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.326279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.326342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.326561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.326624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.326870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.326934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.327183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.327249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 
00:25:04.168 [2024-07-15 13:04:22.327446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.327509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.327730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.327830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.328069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.328133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.328351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.328414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.328634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.328697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.328940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.329005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.329185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.329249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.329429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.329492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.329726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.329820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.330028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.330092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 
00:25:04.168 [2024-07-15 13:04:22.330307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.330370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.330580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.330643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.330870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.330936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.168 [2024-07-15 13:04:22.331178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.168 [2024-07-15 13:04:22.331242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.168 qpair failed and we were unable to recover it. 00:25:04.443 [2024-07-15 13:04:22.331456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.331519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.331697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.331778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.332003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.332067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.332280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.332343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.332559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.332623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.332860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.332924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 
00:25:04.444 [2024-07-15 13:04:22.333131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.333195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.333411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.333474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.333660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.333722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.333963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.334027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.334265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.334329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.334560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.334623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.334840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.334906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.335113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.335177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.335384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.335448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.335683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.335781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 
00:25:04.444 [2024-07-15 13:04:22.335998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.336062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.336274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.336338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.336562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.336625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.336872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.336938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.337149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.337212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.337449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.337512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.337692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.337771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.338013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.338076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.338283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.338346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.338551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.338614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 
00:25:04.444 [2024-07-15 13:04:22.338858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.338923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.339161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.339224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.339432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.339495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.339753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.339817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.340030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.340094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.340322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.340395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.340577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.340641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.340872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.340937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.341147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.341211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.341425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.341489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 
00:25:04.444 [2024-07-15 13:04:22.341722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.341796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.342015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.342079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.342285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.342349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.444 [2024-07-15 13:04:22.342565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.444 [2024-07-15 13:04:22.342629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.444 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.342851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.342916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.343151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.343215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.343426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.343490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.343696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.343797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.344012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.344077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.344332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.344396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 
00:25:04.445 [2024-07-15 13:04:22.344575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.344638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.344880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.344945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.345190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.345253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.345463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.345526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.345736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.345818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.346052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.346115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.346354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.346417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.346628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.346690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.346961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.347024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.347264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.347327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 
00:25:04.445 [2024-07-15 13:04:22.347563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.347625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.347835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.347899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.348143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.348216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.348425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.348489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.348695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.348778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.349032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.349096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.349330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.349394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.349586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.349649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.349875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.349939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.350184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.350247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 
00:25:04.445 [2024-07-15 13:04:22.350454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.350517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.350766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.350830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.351047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.351111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.351323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.351386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.351625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.351688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.351967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.352031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.352253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.352316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.352506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.352569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.352807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.352873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.353116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.353179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 
00:25:04.445 [2024-07-15 13:04:22.353389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.353452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.353688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.353765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.353978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.354042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.354277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.354341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.445 [2024-07-15 13:04:22.354552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.445 [2024-07-15 13:04:22.354616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.445 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.354830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.354896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.355146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.355210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.355453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.355516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.355768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.355833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.356077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.356150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 
00:25:04.446 [2024-07-15 13:04:22.356397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.356461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.356646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.356710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.356916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.356979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.357212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.357276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.357488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.357551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.357791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.357855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.358061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.358124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.358359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.358422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.358636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.358698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.358964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.359028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 
00:25:04.446 [2024-07-15 13:04:22.359263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.359326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.359566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.359628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.359875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.359940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.360185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.360249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.360460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.360524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.360767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.360832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.361071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.361135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.361369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.361432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.361666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.361730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.361945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.362009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 
00:25:04.446 [2024-07-15 13:04:22.362254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.362317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.362531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.362595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.362838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.362903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.363112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.363176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.363410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.363473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.363715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.363794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.364030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.364094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.364316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.364380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.364589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.364651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.364911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.364975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 
00:25:04.446 [2024-07-15 13:04:22.365220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.365284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.365524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.365587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.365800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.365864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.366073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.366135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.366344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.446 [2024-07-15 13:04:22.366407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.446 qpair failed and we were unable to recover it. 00:25:04.446 [2024-07-15 13:04:22.366616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.366679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.366939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.367004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.367210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.367273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.367487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.367551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.367796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.367862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 
00:25:04.447 [2024-07-15 13:04:22.368076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.368140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.368355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.368418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.368659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.368723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.368973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.369037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.369260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.369323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.369565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.369628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.369846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.369910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.370142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.370205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.370436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.370499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.370714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.370792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 
00:25:04.447 [2024-07-15 13:04:22.370999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.371062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.371265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.371328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.371533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.371596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.371825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.371890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.372141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.372205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.372413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.372476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.372714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.372795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.373037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.373100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.373309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.373372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.373587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.373650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 
00:25:04.447 [2024-07-15 13:04:22.373899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.373964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.374181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.374244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.374450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.374513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.374724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.374802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.374991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.375054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.375260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.375323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.375540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.375603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.375840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.375914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.376160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.376224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.376438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.376501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 
00:25:04.447 [2024-07-15 13:04:22.376723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.376805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.377003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.377066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.377249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.377312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.377544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.377607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.377855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.377920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.378132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.447 [2024-07-15 13:04:22.378195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.447 qpair failed and we were unable to recover it. 00:25:04.447 [2024-07-15 13:04:22.378428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.378491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.378723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.378810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.379002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.379041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.379214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.379252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 
00:25:04.448 [2024-07-15 13:04:22.379403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.379440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.379595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.379633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.379797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.379832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.380006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.380039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.380236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.380300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.380546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.380609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.380839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.380873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.381043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.381079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.381254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.381317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.381532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.381594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 
00:25:04.448 [2024-07-15 13:04:22.381805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.381839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.382014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.382064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.382228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.382291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.382529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.382592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.382829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.382868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.383059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.383095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.383287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.383350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.383558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.383620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.383859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.383894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.384071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.384108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 
00:25:04.448 [2024-07-15 13:04:22.384269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.384331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.384547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.384610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.384842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.384876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.385064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.385100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.385295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.385358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.385597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.385659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.385918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.385952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.386099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.386136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.386305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.386368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 00:25:04.448 [2024-07-15 13:04:22.386604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.386668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.448 qpair failed and we were unable to recover it. 
00:25:04.448 [2024-07-15 13:04:22.386899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.448 [2024-07-15 13:04:22.386933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.387094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.387131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.387275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.387339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.387558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.387621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.387835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.387870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.388016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.388067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.388236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.388300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.388541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.388605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.388816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.388850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.389025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.389081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 
00:25:04.449 [2024-07-15 13:04:22.389233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.389296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.389513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.389577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.389818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.389853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.389972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.390006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.390152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.390219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.390455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.390517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.390767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.390829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.390982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.391015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.391201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.391264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.391476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.391538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 
00:25:04.449 [2024-07-15 13:04:22.391784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.391837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.392012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.392063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.392236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.392298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.392534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.392597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.392814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.392849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.393000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.393033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.393248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.393311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.393548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.393612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.393815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.393849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.394000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.394033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 
00:25:04.449 [2024-07-15 13:04:22.394202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.394265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.394499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.394562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.394795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.394831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.394976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.395009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.395173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.395236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.395453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.395517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.395730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.395828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.395987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.396020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.396209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.396273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.396494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.396558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 
00:25:04.449 [2024-07-15 13:04:22.396762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.396823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.449 [2024-07-15 13:04:22.397000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.449 [2024-07-15 13:04:22.397054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.449 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.397302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.397366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.397602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.397665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.397919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.397953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.398120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.398183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.398370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.398433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.398640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.398703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.398954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.398987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.399177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.399239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 
00:25:04.450 [2024-07-15 13:04:22.399448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.399511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.399760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.399821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.399968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.400006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.400239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.400304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.400544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.400608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.400850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.400885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.401079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.401142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.401347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.401410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.401656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.401719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.401957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.401990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 
00:25:04.450 [2024-07-15 13:04:22.402145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.402208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.402443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.402507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.402765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.402822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.402968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.403002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.403187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.403249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.403452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.403514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.403734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.403825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.403985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.404018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.404223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.404286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.404493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.404556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 
00:25:04.450 [2024-07-15 13:04:22.404793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.404857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.405096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.405159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.405373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.405435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.405669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.405733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.406055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.406139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.406346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.406404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.406574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.406632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.406857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.406934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.407148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.407222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.407421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.407493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 
00:25:04.450 [2024-07-15 13:04:22.407700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.407791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.408001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.408061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.450 [2024-07-15 13:04:22.408276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.450 [2024-07-15 13:04:22.408334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.450 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.408559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.408618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.408853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.408932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.409167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.409242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.409464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.409522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.409756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.409815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.410062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.410138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.410342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.410417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 
00:25:04.451 [2024-07-15 13:04:22.410592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.410650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.410888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.410966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.411211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.411288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.411497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.411555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.411769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.411829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.412031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.412114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.412302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.412377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.412568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.412627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.412860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.412939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.413143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.413218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 
00:25:04.451 [2024-07-15 13:04:22.413390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.413449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.413673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.413731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.413982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.414065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.414265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.414324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.414525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.414582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.414795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.414854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.415067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.415135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.415343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.415402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.415624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.415682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.415910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.415969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 
00:25:04.451 [2024-07-15 13:04:22.416206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.416264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.416497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.416556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.416772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.416833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.417056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.417115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.417315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.417392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.417619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.417677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.417897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.417975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.418152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.418228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.418428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.418487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.418713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.418786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 
00:25:04.451 [2024-07-15 13:04:22.419072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.419166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.419424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.419486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.419697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.419781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.420024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.451 [2024-07-15 13:04:22.420084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.451 qpair failed and we were unable to recover it. 00:25:04.451 [2024-07-15 13:04:22.420307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.420371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.420612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.420676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.420949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.421009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.421276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.421341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.421606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.421671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.421912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.421972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 
00:25:04.452 [2024-07-15 13:04:22.422225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.422289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.422479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.422543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.422772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.422832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.423083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.423159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.423378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.423443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.423679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.423760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.424017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.424076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.424298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.424363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.424602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.424666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.424908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.424968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 
00:25:04.452 [2024-07-15 13:04:22.425168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.425227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.425473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.425537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.425763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.425843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.426067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.426131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.426392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.426455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.426671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.426735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.427018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.427098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.427330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.427395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.427654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.427719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.427988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.428064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 
00:25:04.452 [2024-07-15 13:04:22.428298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.428362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.428572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.428636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.428886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.428947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.429133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.429198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.429411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.429475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.429718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.429828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.430032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.430091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.430353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.430417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.430654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.452 [2024-07-15 13:04:22.430718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.452 qpair failed and we were unable to recover it. 00:25:04.452 [2024-07-15 13:04:22.430988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.431067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 
00:25:04.453 [2024-07-15 13:04:22.431339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.431398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.431646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.431709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.431979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.432039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.432295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.432358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.432594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.432657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.432926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.432992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.433241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.433305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.433521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.433585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.433825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.433890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.434131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.434195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 
00:25:04.453 [2024-07-15 13:04:22.434413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.434478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.434718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.434795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.435019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.435084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.435297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.435371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.435558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.435622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.435864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.435930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.436183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.436247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.436495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.436559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.436778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.436844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.437084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.437148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 
00:25:04.453 [2024-07-15 13:04:22.437398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.437462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.437681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.437762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.438005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.438069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.438290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.438354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.438567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.438630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.438883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.438948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.439193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.439257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.439519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.439583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.439825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.439890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.440106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.440171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 
00:25:04.453 [2024-07-15 13:04:22.440389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.440452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.440694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.440772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.440973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.441037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.441256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.441319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.441514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.441578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.441819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.441884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.442120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.442184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.442425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.442489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.442702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.442779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 00:25:04.453 [2024-07-15 13:04:22.443024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.453 [2024-07-15 13:04:22.443087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.453 qpair failed and we were unable to recover it. 
00:25:04.453 [2024-07-15 13:04:22.443347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.443412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.443659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.443722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.443997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.444062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.444273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.444336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.444552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.444615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.444856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.444921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.445131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.445194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.445407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.445471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.445713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.445791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.446029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.446093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 
00:25:04.454 [2024-07-15 13:04:22.446311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.446375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.446617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.446681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.446913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.446977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.447226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.447300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.447518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.447581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.447825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.447890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.448076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.448140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.448361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.448425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.448631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.448694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.448956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.449021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 
00:25:04.454 [2024-07-15 13:04:22.449258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.449323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.449562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.449625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.449848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.449915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.450158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.450223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.450464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.450528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.450772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.450838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.451086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.451150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.451391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.451455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.451654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.451717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.451958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.452023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 
00:25:04.454 [2024-07-15 13:04:22.452241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.452305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.452533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.452598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.452810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.452875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.453063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.453127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.453348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.453413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.453629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.453692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.453921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.453986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.454243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.454307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.454523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.454586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.454844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.454910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 
00:25:04.454 [2024-07-15 13:04:22.455160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.454 [2024-07-15 13:04:22.455224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.454 qpair failed and we were unable to recover it. 00:25:04.454 [2024-07-15 13:04:22.455449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.455513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.455765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.455831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.456045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.456109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.456351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.456415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.456655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.456719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.456979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.457043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.457257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.457321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.457563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.457627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.457887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.457951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 
00:25:04.455 [2024-07-15 13:04:22.458170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.458233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.458475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.458539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.458767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.458833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.459075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.459149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.459368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.459432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.459674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.459770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.459990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.460054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.460246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.460310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.460548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.460610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.460830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.460896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 
00:25:04.455 [2024-07-15 13:04:22.461109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.461174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.461391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.461454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.461672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.461750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.461953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.462018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.462257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.462320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.462563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.462626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.462884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.462949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.463180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.463245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.463476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.463539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.463784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.463849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 
00:25:04.455 [2024-07-15 13:04:22.464092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.464156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.464366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.464430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.464643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.464707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.464978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.465044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.465252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.465316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.465529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.465592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.465804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.465870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.466092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.466157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.466395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.466458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.466678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.466755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 
00:25:04.455 [2024-07-15 13:04:22.466982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.467047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.455 [2024-07-15 13:04:22.467284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.455 [2024-07-15 13:04:22.467349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.455 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.467585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.467649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.467897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.467964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.468206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.468270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.468476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.468540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.468791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.468856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.469107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.469172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.469388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.469452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.469662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.469726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 
00:25:04.456 [2024-07-15 13:04:22.469965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.470028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.470270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.470335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.470574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.470638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.470871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.470946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.471163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.471227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.471446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.471509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.471719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.471802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.472059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.472124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.472369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.472433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.472671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.472735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 
00:25:04.456 [2024-07-15 13:04:22.472974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.473039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.473286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.473349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.473595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.473659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.473919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.473984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.474233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.474296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.474540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.474603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.474844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.474909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.475161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.475226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.475446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.475510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.475777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.475843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 
00:25:04.456 [2024-07-15 13:04:22.476057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.476121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.476338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.476402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.476618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.476682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.476916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.476981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.477220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.477284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.477523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.477587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.477801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.477866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.456 qpair failed and we were unable to recover it. 00:25:04.456 [2024-07-15 13:04:22.478078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.456 [2024-07-15 13:04:22.478142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.478324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.478388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.478593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.478656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 
00:25:04.457 [2024-07-15 13:04:22.478901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.478969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.479206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.479271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.479482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.479547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.479771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.479837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.480084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.480147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.480388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.480452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.480664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.480728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.480979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.481044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.481253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.481317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.481527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.481590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 
00:25:04.457 [2024-07-15 13:04:22.481829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.481895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.482105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.482170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.482386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.482449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.482658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.482732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.482995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.483060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.483300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.483363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.483546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.483611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.483862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.483927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.484155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.484219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.484456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.484520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 
00:25:04.457 [2024-07-15 13:04:22.484767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.484832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.485048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.485112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.485330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.485393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.485608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.485672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.485929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.485994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.486236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.486300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.486513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.486576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.486832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.486899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.487150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.487213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.487451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.487515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 
00:25:04.457 [2024-07-15 13:04:22.487736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.487813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.488054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.488118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.488369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.488433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.488645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.488710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.488973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.489037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.489275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.489339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.489575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.489638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.489916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.457 [2024-07-15 13:04:22.489983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.457 qpair failed and we were unable to recover it. 00:25:04.457 [2024-07-15 13:04:22.490189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.490252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.490492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.490556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 
00:25:04.458 [2024-07-15 13:04:22.490774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.490850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.491096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.491161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.491370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.491434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.491677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.491754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.491933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.491998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.492210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.492274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.492514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.492578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.492816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.492882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.493095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.493159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.493393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.493456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 
00:25:04.458 [2024-07-15 13:04:22.493699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.493776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.493990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.494055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.494271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.494335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.494574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.494638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.494917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.494983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.495232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.495296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.495534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.495598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.495810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.495876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.496109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.496188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.496401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.496465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 
00:25:04.458 [2024-07-15 13:04:22.496679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.496770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.497017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.497082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.497299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.497363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.497604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.497668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.497926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.497992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.498237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.498301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.498519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.498583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.498843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.498908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.499126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.499190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.499429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.499493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 
00:25:04.458 [2024-07-15 13:04:22.499707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.499789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.500009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.500073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.500304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.500368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.500583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.500647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.500909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.500974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.501229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.501293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.501530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.501594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.501810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.501877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.502131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.458 [2024-07-15 13:04:22.502195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.458 qpair failed and we were unable to recover it. 00:25:04.458 [2024-07-15 13:04:22.502411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.502476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 
00:25:04.459 [2024-07-15 13:04:22.502687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.502773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.503015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.503079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.503302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.503365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.503574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.503637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.503884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.503949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.504161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.504225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.504412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.504475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.504715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.504794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.504988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.505052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.505303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.505367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 
00:25:04.459 [2024-07-15 13:04:22.505582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.505645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.505920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.505986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.506202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.506267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.506483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.506547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.506763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.506828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.507069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.507134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.507387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.507451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.507659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.507722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.507959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.508023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 00:25:04.459 [2024-07-15 13:04:22.508262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.459 [2024-07-15 13:04:22.508327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.459 qpair failed and we were unable to recover it. 
00:25:04.464 [2024-07-15 13:04:22.567778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.567843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.568055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.568120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.568330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.568395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.568605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.568669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.568951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.569017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.569264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.569328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.569560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.569623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.569907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.569974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.570224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.570289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.570504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.570568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 
00:25:04.464 [2024-07-15 13:04:22.570782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.570848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.571092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.571156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.571404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.571469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.571705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.571783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.572006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.572065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.572276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.572340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.572557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.572621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.572864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.464 [2024-07-15 13:04:22.572929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.464 qpair failed and we were unable to recover it. 00:25:04.464 [2024-07-15 13:04:22.573169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.573233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.573457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.573521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 
00:25:04.465 [2024-07-15 13:04:22.573730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.573819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.574033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.574098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.574308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.574372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.574581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.574645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.574879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.574945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.575194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.575258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.575508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.575573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.575815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.575881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.576130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.576193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.576433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.576497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 
00:25:04.465 [2024-07-15 13:04:22.576769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.576834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.577046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.577110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.577313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.577377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.577587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.577651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.577931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.577997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.578220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.578285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.578496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.578561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.578783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.578850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.579069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.579133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.579347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.579411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 
00:25:04.465 [2024-07-15 13:04:22.579625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.579689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.465 [2024-07-15 13:04:22.579936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.465 [2024-07-15 13:04:22.580002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.465 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.580248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.580312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.580548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.580613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.580800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.580864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.581075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.581139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.581379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.581444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.581700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.581782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.581983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.582048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.582268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.582332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 
00:25:04.466 [2024-07-15 13:04:22.582552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.582616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.582812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.582878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.583125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.583190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.583425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.583489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.583686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.583762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.584002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.584066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.584277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.584341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.584583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.584647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.584910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.584975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.585191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.585256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 
00:25:04.466 [2024-07-15 13:04:22.585492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.585565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.585807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.585873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.586102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.586166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.586384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.586448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.586659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.586723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.586950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.587015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.587235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.587299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.587509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.587573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.587825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.587890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.588127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.588192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 
00:25:04.466 [2024-07-15 13:04:22.588434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.588498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.588750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.588816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.589030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.589094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.589311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.589376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.589609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.589674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.589942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.590006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.590243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.590307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.590548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.590613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.590825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.590890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.591109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.591173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 
00:25:04.466 [2024-07-15 13:04:22.591392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.591456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.591642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.591706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.466 qpair failed and we were unable to recover it. 00:25:04.466 [2024-07-15 13:04:22.591939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.466 [2024-07-15 13:04:22.592005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.592216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.592281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.592520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.592585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.592808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.592873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.593064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.593127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.593376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.593440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.593656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.593720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.593985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.594049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 
00:25:04.467 [2024-07-15 13:04:22.594289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.594353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.594598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.594663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.594912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.594977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.595189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.595252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.595464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.595528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.595775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.595841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.596064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.596128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.596342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.596406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.596633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.596697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.596926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.596991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 
00:25:04.467 [2024-07-15 13:04:22.597205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.597283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.597515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.597573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.597817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.597878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.598083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.598142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.598410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.598470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.598710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.598802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.599019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.599097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.599336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.599400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.599658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.599717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.599976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.600058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 
00:25:04.467 [2024-07-15 13:04:22.600292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.600351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.600602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.600667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.600904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.600987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.601246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.601311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.601542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.601608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.601842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.601907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.602126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.602191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.602411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.602476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.602714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.602808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.603048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.603113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 
00:25:04.467 [2024-07-15 13:04:22.603352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.603416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.603641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.603705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.603959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.467 [2024-07-15 13:04:22.604025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.467 qpair failed and we were unable to recover it. 00:25:04.467 [2024-07-15 13:04:22.604240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.604305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.604514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.604577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.604792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.604858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.605066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.605131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.605364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.605429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.605649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.605715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.605953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.606017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 
00:25:04.468 [2024-07-15 13:04:22.606231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.606295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.606516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.606581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.606816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.606882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.607092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.607157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.607397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.607461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.607709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.607786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.607981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.608047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.608284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.608349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.608590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.608653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 00:25:04.468 [2024-07-15 13:04:22.608914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.468 [2024-07-15 13:04:22.608995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.468 qpair failed and we were unable to recover it. 
00:25:04.746 [2024-07-15 13:04:22.664690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.746 [2024-07-15 13:04:22.664767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.665001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.665084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.665325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.665389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.665644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.665708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.665935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.666000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.666239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.666302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.666542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.666606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.666845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.666911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.667109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.667174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.667392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.667456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 
00:25:04.747 [2024-07-15 13:04:22.667698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.667780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.668042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.668107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.668291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.668356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.668531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.668594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.668833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.668900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.669141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.669205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.669427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.669492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.669752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.669818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.670029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.670093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.670311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.670376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 
00:25:04.747 [2024-07-15 13:04:22.670618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.670682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.670934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.670999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.671241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.671305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.671555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.671620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.671885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.671950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.672162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.672226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.672444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.672508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.672762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.672828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.673066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.673131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.673350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.673414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 
00:25:04.747 [2024-07-15 13:04:22.673596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.673660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.673914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.673980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.674220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.674283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.674524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.674589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.674833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.674899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.675112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.675176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.675396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.675459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.675697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.675787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.676007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.676072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.747 qpair failed and we were unable to recover it. 00:25:04.747 [2024-07-15 13:04:22.676321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.747 [2024-07-15 13:04:22.676385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 
00:25:04.748 [2024-07-15 13:04:22.676625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.676690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.676953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.677018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.677230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.677295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.677484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.677548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.677751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.677817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.678029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.678095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.678336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.678400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.678618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.678683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.678923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.678992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.679233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.679297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 
00:25:04.748 [2024-07-15 13:04:22.679544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.679610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.679856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.679923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.680164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.680228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.680467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.680531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.680757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.680823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.681031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.681095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.681331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.681395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.681637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.681701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.681959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.682025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.682238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.682303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 
00:25:04.748 [2024-07-15 13:04:22.682548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.682612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.682826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.682892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.683131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.683196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.683376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.683441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.683691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.683776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.684006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.684071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.684280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.684344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.684530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.684594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.684833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.684899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.685143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.685207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 
00:25:04.748 [2024-07-15 13:04:22.685445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.748 [2024-07-15 13:04:22.685509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.748 qpair failed and we were unable to recover it. 00:25:04.748 [2024-07-15 13:04:22.685767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.685832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.686077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.686142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.686354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.686418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.686612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.686677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.686900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.686965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.687216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.687280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.687529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.687603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.687845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.687909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.688152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.688216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 
00:25:04.749 [2024-07-15 13:04:22.688426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.688490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.688730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.688811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.689050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.689114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.689330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.689395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.689629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.689693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.689955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.690020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.690242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.690306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.690541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.690605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.690844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.690909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.691129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.691193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 
00:25:04.749 [2024-07-15 13:04:22.691432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.691496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.691721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.691801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.692047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.692111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.692297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.692361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.692578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.692642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.692863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.692929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.693168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.693232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.693472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.693536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.693776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.693842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.694034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.694098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 
00:25:04.749 [2024-07-15 13:04:22.694345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.694409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.694650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.694715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.694972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.695037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.695249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.695313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.695539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.695604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.695825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.695891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.696075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.749 [2024-07-15 13:04:22.696140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.749 qpair failed and we were unable to recover it. 00:25:04.749 [2024-07-15 13:04:22.696363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.696427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.696670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.696736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.696999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.697057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 
00:25:04.750 [2024-07-15 13:04:22.697266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.697331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.697569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.697633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.697866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.697926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.698159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.698219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.698452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.698511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.698722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.698794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.699027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.699087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.699315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.699383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.699591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.699650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.699865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.699927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 
00:25:04.750 [2024-07-15 13:04:22.700152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.700211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.700444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.700503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.700727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.700802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.701044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.701103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.701311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.701370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.701605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.701682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.701950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.702011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.702275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.702339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.702581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.702640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.702904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.702965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 
00:25:04.750 [2024-07-15 13:04:22.703171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.703231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.703468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.703528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.703730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.703806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.704049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.704109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.704338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.704399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.704606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.704665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.704882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.704943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.705179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.705239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.705437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.705496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 00:25:04.750 [2024-07-15 13:04:22.705672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.705732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it. 
00:25:04.750 [2024-07-15 13:04:22.705981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.750 [2024-07-15 13:04:22.706040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.750 qpair failed and we were unable to recover it.
00:25:04.750-00:25:04.756 [... the same error triple repeats continuously from 13:04:22.706 through 13:04:22.772: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f7dc8000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:25:04.756 [2024-07-15 13:04:22.772980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.773045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.773310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.773375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.773653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.773717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.774038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.774102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.774334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.774399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.774664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.774728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.774984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.775059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.775377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.756 [2024-07-15 13:04:22.775452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.756 qpair failed and we were unable to recover it. 00:25:04.756 [2024-07-15 13:04:22.775773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.775839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.776082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.776148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 
00:25:04.757 [2024-07-15 13:04:22.776336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.776400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.776615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.776679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.777066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.777131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.777443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.777509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.777717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.777796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.778035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.778100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.778414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.778484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.778785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.778850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.779174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.779247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.779542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.779606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 
00:25:04.757 [2024-07-15 13:04:22.779818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.779884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.780093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.780158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.780381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.780457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.780747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.780813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.781143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.781209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.781499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.781563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.781890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.781956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.782218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.782284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.782547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.782611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.782850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.782918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 
00:25:04.757 [2024-07-15 13:04:22.783196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.783261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.783560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.783625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.783970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.784044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.784358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.784422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.784665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.784731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.785032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.785097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.785307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.785371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.785603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.785668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.785902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.785968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.786231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.786296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 
00:25:04.757 [2024-07-15 13:04:22.786535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.786600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.786837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.786904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.787168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.787233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.787442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.757 [2024-07-15 13:04:22.787507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.757 qpair failed and we were unable to recover it. 00:25:04.757 [2024-07-15 13:04:22.787686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.787765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.788098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.788162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.788495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.788560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.788823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.788898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.789163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.789228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.789573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.789646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 
00:25:04.758 [2024-07-15 13:04:22.789936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.790003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.790290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.790350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.790680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.790787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.791041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.791106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.791304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.791368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.791591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.791662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.791916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.791982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.792203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.792267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.792489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.792553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.792934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.793001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 
00:25:04.758 [2024-07-15 13:04:22.793255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.793320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.793606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.793672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.793984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.794061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.794377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.794440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.794707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.794784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.795054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.795118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.795351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.795416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.795666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.795731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.795985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.796048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.796421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.796486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 
00:25:04.758 [2024-07-15 13:04:22.796754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.796820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.797105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.797169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.797459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.797523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.797715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.797815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.758 qpair failed and we were unable to recover it. 00:25:04.758 [2024-07-15 13:04:22.798039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.758 [2024-07-15 13:04:22.798112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.798374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.798440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.798697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.798779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.799057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.799122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.799444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.799516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.799760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.799825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 
00:25:04.759 [2024-07-15 13:04:22.800137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.800201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.800523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.800598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.800845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.800911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.801239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.801304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.801629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.801695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.801939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.802003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.802177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.802241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.802462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.802527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.802865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.802930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.803318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.803383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 
00:25:04.759 [2024-07-15 13:04:22.803642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.803707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.803984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.804049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.804287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.804352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.804625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.804689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.804907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.804972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.805222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.805287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.805531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.805596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.805857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.805923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.806185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.806250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.806573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.806641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 
00:25:04.759 [2024-07-15 13:04:22.806970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.807035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.807270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.807334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.807560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.807626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.807902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.807967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.808258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.808323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.808654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.808719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.809008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.809073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.809400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.809467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.809708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.809789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.810084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.810149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 
00:25:04.759 [2024-07-15 13:04:22.810333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.810393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.759 qpair failed and we were unable to recover it. 00:25:04.759 [2024-07-15 13:04:22.810622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.759 [2024-07-15 13:04:22.810687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.810988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.811052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.811288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.811352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.811615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.811689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.811924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.812001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.812363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.812427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.812647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.812712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.813003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.813067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.813414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.813479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 
00:25:04.760 [2024-07-15 13:04:22.813724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.813821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.814164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.814229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.814604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.814679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.815051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.815118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.815440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.815510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.815783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.815850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.816102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.816166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.816505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.816576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.816854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.816920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.817122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.817196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 
00:25:04.760 [2024-07-15 13:04:22.817453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.817518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.817817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.817883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.818213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.818282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.818580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.818644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.818875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.818940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.819207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.819272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.819554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.819618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.819849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.819915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.820258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.820334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.820548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.820613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 
00:25:04.760 [2024-07-15 13:04:22.820840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.820904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.821199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.821264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.821532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.821597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.821814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.821899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.822231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.822296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.822632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.822696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.822978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.823043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.823395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.823460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.823702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.760 [2024-07-15 13:04:22.823797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.760 qpair failed and we were unable to recover it. 00:25:04.760 [2024-07-15 13:04:22.824056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.824121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 
00:25:04.761 [2024-07-15 13:04:22.824466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.824530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.824769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.824835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.825159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.825230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.825434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.825499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.825735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.825824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.826125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.826190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.826435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.826499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.826887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.826963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.827193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.827257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.827519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.827584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 
00:25:04.761 [2024-07-15 13:04:22.827881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.827945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.828187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.828263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.828516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.828580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.828835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.828900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.829206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.829271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.829487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.829848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.829923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.830167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.830232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.830504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.830570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.830803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.830869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 
00:25:04.761 [2024-07-15 13:04:22.831108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.831173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.831537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.831602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.831840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.831904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.832143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.832207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.832488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.832553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.832774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.832839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.833069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.833134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.833388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.833452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.833726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.833812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.834034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.834099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 
00:25:04.761 [2024-07-15 13:04:22.834330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.834393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.834676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.834758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.835032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.835097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.835377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.835441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.835630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.835695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.835955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.836021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.836213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.836277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.836494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.836566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.836802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.836868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.761 qpair failed and we were unable to recover it. 00:25:04.761 [2024-07-15 13:04:22.837197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.761 [2024-07-15 13:04:22.837262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 
00:25:04.762 [2024-07-15 13:04:22.837479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.837543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.837784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.837850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.838198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.838262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.838464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.838528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.838762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.838847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.839117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.839181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.839536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.839600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.839927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.839992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.840199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.840273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.840541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.840605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 
00:25:04.762 [2024-07-15 13:04:22.840959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.841024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.841350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.841414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.841808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.841874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.842096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.842161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.842515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.842583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.842816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.842882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.843118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.843183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.843434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.843507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.843783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.843849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.844084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.844148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 
00:25:04.762 [2024-07-15 13:04:22.844464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.844535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.844752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.844819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.845032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.845095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.845325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.845389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.845688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.845770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.846076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.846140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.846399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.846463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.846704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.846786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.847122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.847192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.847477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.847541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 
00:25:04.762 [2024-07-15 13:04:22.847767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.847843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.848122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.848187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.848531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.848598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.848893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.848959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.762 qpair failed and we were unable to recover it. 00:25:04.762 [2024-07-15 13:04:22.849203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.762 [2024-07-15 13:04:22.849267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.849544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.849608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.849809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.849875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.850108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.850174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.850426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.850490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.850730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.850809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 
00:25:04.763 [2024-07-15 13:04:22.851157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.851227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.851460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.851524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.851765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.851830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.852159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.852224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.852506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.852578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.852801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.852869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.853102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.853168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.853367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.853431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.853765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.853830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.854056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.854121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 
00:25:04.763 [2024-07-15 13:04:22.854354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.854419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.854803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.854868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.855192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.855257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.855602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.855666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.855920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.855986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.856242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.856306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.856592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.856656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.856918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.856984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.857191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.857256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.857519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.857584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 
00:25:04.763 [2024-07-15 13:04:22.857830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.857896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.858219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.858284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.858486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.858551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.858811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.858878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.859076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.859140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.859346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.859420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.859674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.859752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.859956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.860020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.860253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.860317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.860560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.860625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 
00:25:04.763 [2024-07-15 13:04:22.860830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.860896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.861139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.861204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.861451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.861515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.861809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.861876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.862113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.763 [2024-07-15 13:04:22.862178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.763 qpair failed and we were unable to recover it. 00:25:04.763 [2024-07-15 13:04:22.862427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.862491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.862853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.862919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.863128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.863193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.863432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.863498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.863775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.863842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 
00:25:04.764 [2024-07-15 13:04:22.864035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.864101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.864312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.864377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.864562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.864626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.864852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.864919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.865164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.865239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.865449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.865513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.865725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.865805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.866021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.866085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.866360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.866423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.866644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.866708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 
00:25:04.764 [2024-07-15 13:04:22.866912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.866977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.867205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.867269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.867536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.867599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.867837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.867872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.868013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.868047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.868305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.868339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.868525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.868585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.868764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.868815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.868941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.868975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.869143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.869179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 
00:25:04.764 [2024-07-15 13:04:22.869333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.869368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.869546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.869593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.869762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.869813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.869957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.869991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.870157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.870193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.870370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.870406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.870555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.870590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.870696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.870731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.871029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.871083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.871275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.871319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 
00:25:04.764 [2024-07-15 13:04:22.871496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.871531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.871721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.871789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.871933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.871967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.872125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.872189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.872434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.872468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.872622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.764 [2024-07-15 13:04:22.872656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.764 qpair failed and we were unable to recover it. 00:25:04.764 [2024-07-15 13:04:22.872885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.872921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.873098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.873132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.873324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.873357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.873536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.873569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 
00:25:04.765 [2024-07-15 13:04:22.873679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.873713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.873829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.873863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.874050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.874124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.874366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.874431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.874766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.874838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.875008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.875044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.875245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.875309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.875539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.875603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.875816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.875850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.875968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.876002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 
00:25:04.765 [2024-07-15 13:04:22.876140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.876174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.876357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.876402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.876581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.876615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.876732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.876775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.876915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.876949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.877062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.877095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.877241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.877275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.877425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.877489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.877718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.877802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.877972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.878007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 
00:25:04.765 [2024-07-15 13:04:22.878157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.878203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.878446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.878481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.878744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.878796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.878974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.879008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.879152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.879186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.879361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.879406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.879538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.879582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.879817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.879863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.880042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.880076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.880237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.880301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 
00:25:04.765 [2024-07-15 13:04:22.880483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.880544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.880773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.880824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.880967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.881002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.881126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.881159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.881321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.881355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.881530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.881594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.765 [2024-07-15 13:04:22.881801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.765 [2024-07-15 13:04:22.881836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.765 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.881988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.882033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.882197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.882261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.882496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.882561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 
00:25:04.766 [2024-07-15 13:04:22.882831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.882865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.883039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.883073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.883435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.883498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.883843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.883878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.884023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.884062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.884214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.884248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.884456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.884490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.884637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.884670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.884840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.884874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.885020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.885055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 
00:25:04.766 [2024-07-15 13:04:22.885241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.885275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.885439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.885472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.885668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.885702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.885887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.885921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.886068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.886102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.886253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.886287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.886503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.886543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.886693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.886727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.886858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.886892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.887143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.887177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 
00:25:04.766 [2024-07-15 13:04:22.887403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.887436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.887587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.887621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.887769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.887804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.887923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.887957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.888110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.888144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.888324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.888357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.888506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.888539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.888697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1babae0 is same with the state(5) to be set 00:25:04.766 [2024-07-15 13:04:22.888935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.888988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.889170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.889206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 
00:25:04.766 [2024-07-15 13:04:22.889319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.889353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.889556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.889602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.889791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.889825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.889976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.890010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.890207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.890241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.890423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.890457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.890571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.890604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.766 [2024-07-15 13:04:22.890769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.766 [2024-07-15 13:04:22.890822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.766 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.890974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.891009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.891202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.891237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 
00:25:04.767 [2024-07-15 13:04:22.891384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.891418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.891558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.891591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.891712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.891757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.891908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.891942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.892102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.892136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.892275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.892309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.892445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.892479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.892713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.892757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.892907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.892941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.893111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.893145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 
00:25:04.767 [2024-07-15 13:04:22.893345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.893378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.893551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.893585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.893747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.893781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.893895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.893929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.894098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.894131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.894275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.894309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.894528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.894562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.894671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.894704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.894858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.894898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.895081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.895115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 
00:25:04.767 [2024-07-15 13:04:22.895299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.895332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.895475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.895507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.895707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.895761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.895893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.895925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.896096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.896128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.896307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.896339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.896479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.896510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.896695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.896727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.896855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.896887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.896992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.897025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 
00:25:04.767 [2024-07-15 13:04:22.897215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.897256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.897434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.897498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.897664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.767 [2024-07-15 13:04:22.897700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.767 qpair failed and we were unable to recover it. 00:25:04.767 [2024-07-15 13:04:22.897853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.897885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.897988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.898020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.898146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.898180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.898392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.898425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.898572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.898606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.898755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.898790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.898945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.898979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 
00:25:04.768 [2024-07-15 13:04:22.899166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.899219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.899373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.899409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.899596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.899629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.899773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.899807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.899929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.899962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.900141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.900175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.900342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.900378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.900504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.900538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.900733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.900773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.900891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.900925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 
00:25:04.768 [2024-07-15 13:04:22.901127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.901172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.901353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.901429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.901590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.901649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.901861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.901895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.902066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.902145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.902349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.902409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.902613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.902672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.902889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.902924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.903045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.903088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.903198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.903232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 
00:25:04.768 [2024-07-15 13:04:22.903346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.903407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.903614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.903673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.903884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.903919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.904138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.904175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.904346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.904380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.904588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.904649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.904824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.904859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.904999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.905033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.905207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.905242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.905355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.905388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 
00:25:04.768 [2024-07-15 13:04:22.905559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.905619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.905811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.905846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.905998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.906032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.768 qpair failed and we were unable to recover it. 00:25:04.768 [2024-07-15 13:04:22.906174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.768 [2024-07-15 13:04:22.906208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.906358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.906391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.906578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.906638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.906868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.906903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.907042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.907076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.907222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.907256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.907376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.907409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 
00:25:04.769 [2024-07-15 13:04:22.907625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.907684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.907904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.907938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.908055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.908089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.908239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.908310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.908512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.908567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.908802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.908838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.908975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.909009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.909159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.909193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.909366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.909422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.909583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.909649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 
00:25:04.769 [2024-07-15 13:04:22.909857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.909891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.910038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.910072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.910258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.910313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.910510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.910575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.910786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.910845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.910963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.910997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.911144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.911178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.911350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.911394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.911521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.911560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.911734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.911774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 
00:25:04.769 [2024-07-15 13:04:22.911913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.911948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.912140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.912174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.912316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.912350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.912496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.912560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.912788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.912822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.912966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.912999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.913168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.913201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.913370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.913425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.769 qpair failed and we were unable to recover it. 00:25:04.769 [2024-07-15 13:04:22.913638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.769 [2024-07-15 13:04:22.913693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.913875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.913908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 
00:25:04.770 [2024-07-15 13:04:22.914052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.914085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.914231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.914265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.914426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.914496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.914707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.914787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.914935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.914969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.915107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.915141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.915375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.915430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.915637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.915692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.915905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.915940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.916084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.916118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 
00:25:04.770 [2024-07-15 13:04:22.916239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.916274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.916440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.916505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.916694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.916766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.916915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.916950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.917119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.917170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.917368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.917424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.917607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.917666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.917858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.917893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.918023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.918057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.918223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.918257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 
00:25:04.770 [2024-07-15 13:04:22.918375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.918410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.918584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.918618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.918771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.918805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.918920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.918954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.919100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.919155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.919386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.919441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.919631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.919685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.919858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.919892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.920018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.920057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.920216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.920271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 
00:25:04.770 [2024-07-15 13:04:22.920479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.920534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.920754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.920812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.920930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.920964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.921191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.921254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.921443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.921498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.921684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.921752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.921899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.921933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.922140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.770 [2024-07-15 13:04:22.922174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.770 qpair failed and we were unable to recover it. 00:25:04.770 [2024-07-15 13:04:22.922349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.922384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.922512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.922546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 
00:25:04.771 [2024-07-15 13:04:22.922660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.922694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.922821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.922856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.922987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.923021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.923216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.923250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.923373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.923430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.923589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.923645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.923869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.923904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.924023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.924056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.924195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.924229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.924401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.924435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 
00:25:04.771 [2024-07-15 13:04:22.924579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.924612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.924765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.924800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.924914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.924948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.925072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.925105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.925313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.925365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.925623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.925676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.925852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.925887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.926008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.926058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.926229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.926263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.926389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.926446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 
00:25:04.771 [2024-07-15 13:04:22.926592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.926645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.926807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.926841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.926958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.926992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.927170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.927206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.927333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.927392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.927584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.927618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.927783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.927818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.927941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.927976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.928115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.928154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.928266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.928301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 
00:25:04.771 [2024-07-15 13:04:22.928465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.928528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.928754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.928811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.928926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.771 [2024-07-15 13:04:22.928960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.771 qpair failed and we were unable to recover it. 00:25:04.771 [2024-07-15 13:04:22.929155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.929201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.929357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.929415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.929599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.929651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.929831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.929866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.929987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.930021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.930207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.930258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.930427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.930477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 
00:25:04.772 [2024-07-15 13:04:22.930621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.930655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.930808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.930843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.930975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.931009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.931214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.931248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.931416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.931452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.931611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.931664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.931845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.931880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.931999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.932034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.932177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.932211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.932383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.932417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 
00:25:04.772 [2024-07-15 13:04:22.932605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.932657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.932831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.932866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.932980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.933013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.933193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.933227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.933377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.933412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.933558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.933593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.933770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.933804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.933928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.933962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.934149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.934201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.934378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.934434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 
00:25:04.772 [2024-07-15 13:04:22.934630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.934683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:04.772 [2024-07-15 13:04:22.934876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:04.772 [2024-07-15 13:04:22.934911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:04.772 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.935033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.935068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.935217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.935252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.935420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.935454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.935602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.935636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.935766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.935800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.935951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.935986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.936184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.936226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.937540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.937572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 
00:25:05.053 [2024-07-15 13:04:22.937764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.937805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.937918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.937945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.938908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.938935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.939070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 
00:25:05.053 [2024-07-15 13:04:22.939200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.939385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.939521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.939681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.939823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.939956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.939982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.053 [2024-07-15 13:04:22.940104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.053 [2024-07-15 13:04:22.940131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.053 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.940229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.940255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.940935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.940965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.941163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.941190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 
00:25:05.054 [2024-07-15 13:04:22.941321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.941348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.941480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.941507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.941636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.941663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.941784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.941826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.941936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.941963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.942080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.942106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.942237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.942270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.942420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.942489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.942672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.942705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.942849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.942899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 
00:25:05.054 [2024-07-15 13:04:22.943022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.943069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.943217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.943244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.943445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.943471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.943597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.943624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.943751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.943778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.943909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.943957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.944083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.944130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.944336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.944380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.944513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.944544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.944673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.944699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 
00:25:05.054 [2024-07-15 13:04:22.944830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.944878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.944992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.945036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.945192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.945225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.945345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.945371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.945527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.945554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.945681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.945708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.945855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.945883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.945976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.946003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.946157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.946183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.946335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.946361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 
00:25:05.054 [2024-07-15 13:04:22.946493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.946520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.946648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.946675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.946803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.946852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.946974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.947024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.947139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.947173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.947425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.947452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.947585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.947611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.054 qpair failed and we were unable to recover it. 00:25:05.054 [2024-07-15 13:04:22.947783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.054 [2024-07-15 13:04:22.947815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.947942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.947969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.948070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.948097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 
00:25:05.055 [2024-07-15 13:04:22.948221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.948248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.948401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.948427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.948556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.948582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.948687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.948713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.948850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.948877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.948986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.949111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.949273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.949434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.949570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 
00:25:05.055 [2024-07-15 13:04:22.949750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.949881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.949907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.950898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.950928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.951035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 
00:25:05.055 [2024-07-15 13:04:22.951190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.951375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.951526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.951652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.951791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.951918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.951943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.952042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.952067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.953000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.953171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.953349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 
00:25:05.055 [2024-07-15 13:04:22.953506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.953636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.953797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.953927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.953955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.954059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.954086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.954224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.954250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.954373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.954400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.055 [2024-07-15 13:04:22.954507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.055 [2024-07-15 13:04:22.954534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.055 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.954637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.954664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.954768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.954796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 
00:25:05.056 [2024-07-15 13:04:22.954894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.954921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.955922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.955950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.956083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.956109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.956216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.956243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 
00:25:05.056 [2024-07-15 13:04:22.956377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.956402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.956497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.956523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.956657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.956684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.957376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.957407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.957559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.957606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.958265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.958296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.958421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.958474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.959132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.959162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.959355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.959402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.959513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.959540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 
00:25:05.056 [2024-07-15 13:04:22.959669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.959696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.959824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.959873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.959980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.960891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.960995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 
00:25:05.056 [2024-07-15 13:04:22.961120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.961289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.961416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.961570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.961719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.961871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.961899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.962001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.962029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.962123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.962148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.056 qpair failed and we were unable to recover it. 00:25:05.056 [2024-07-15 13:04:22.962254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.056 [2024-07-15 13:04:22.962281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.962392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.962418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 
00:25:05.057 [2024-07-15 13:04:22.962549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.962575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.962702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.962729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.962845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.962871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.962971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.962997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.963111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.963137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.963260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.963291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.963395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.963422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.963514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.963540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.963668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.963710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.963860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.963888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 
00:25:05.057 [2024-07-15 13:04:22.963985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.964113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.964261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.964376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.964509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.964687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.964865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.964891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.965005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.965053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.965205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.965255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.965462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.965503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 
00:25:05.057 [2024-07-15 13:04:22.965665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.965691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.965802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.965828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.965921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.965947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.966090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.966118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.966276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.966323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.966455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.966502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.966603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.966636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.966767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.966794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.966901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.966947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.967048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.967074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 
00:25:05.057 [2024-07-15 13:04:22.967197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.967224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.967360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.057 [2024-07-15 13:04:22.967385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:05.057 qpair failed and we were unable to recover it. 00:25:05.057 [2024-07-15 13:04:22.967526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.967554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.967655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.967681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.967792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.967818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.967935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.967969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.968082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.968130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.968283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.968318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.968424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.968458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.968645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.968678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 
00:25:05.058 [2024-07-15 13:04:22.968809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.968835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.968940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.968966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.969064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.969091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.969225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.969257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.969493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.969525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.969645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.969681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.969810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.969837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.969937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.969963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.970115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.970161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.970273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.970306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 
00:25:05.058 [2024-07-15 13:04:22.970489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.970537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.970676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.970709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.970848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.970875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.971004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.971030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.971170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.971203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.971396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.971440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.971553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.971579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.971831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.971858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.971967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.971993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 00:25:05.058 [2024-07-15 13:04:22.972109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.972136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 
00:25:05.058 [2024-07-15 13:04:22.972262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.058 [2024-07-15 13:04:22.972288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.058 qpair failed and we were unable to recover it. 
00:25:05.058 [... the same three-message sequence repeats continuously from 13:04:22.972 through 13:04:23.005: "connect() failed, errno = 111" (posix.c:1038:posix_sock_create), "sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420" (nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock), and "qpair failed and we were unable to recover it." ...] 
00:25:05.064 [2024-07-15 13:04:23.005691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.005717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.005831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.005857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.005994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.006020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.006148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.006174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.006332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.006358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.006510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.006536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.006674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.006700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.006835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.006861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.006996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.007022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.007183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.007216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 
00:25:05.064 [2024-07-15 13:04:23.007428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.007468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.007639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.007673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.007823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.007858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.008020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.008053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.008289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.008329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.008533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.008574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.008755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.008781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.008886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.008912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.009011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.009036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 00:25:05.064 [2024-07-15 13:04:23.009161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.064 [2024-07-15 13:04:23.009187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.064 qpair failed and we were unable to recover it. 
00:25:05.064 [2024-07-15 13:04:23.009405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.009431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.009571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.009597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.009713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.009743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.009863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.009890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.010001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.010027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.010125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.010152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.010312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.010339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.010567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.010594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.010711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.010742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.010862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.010889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 
00:25:05.065 [2024-07-15 13:04:23.010990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.011016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.011146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.011173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.011486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.011513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.011708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.011735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.011864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.011891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.012002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.012132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.012255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.012410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.012596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 
00:25:05.065 [2024-07-15 13:04:23.012750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.012888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.012915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.013080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.013121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.013277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.013304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.013409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.013436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.013568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.013594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.013756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.013783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.013888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.013914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.014012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.014043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.014182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.014208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 
00:25:05.065 [2024-07-15 13:04:23.014403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.014429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.065 qpair failed and we were unable to recover it. 00:25:05.065 [2024-07-15 13:04:23.014575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.065 [2024-07-15 13:04:23.014603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.014771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.014799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.014954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.014980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.015146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.015173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.015325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.015351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.015467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.015495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.015645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.015685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.015846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.015875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.016002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.016029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 
00:25:05.066 [2024-07-15 13:04:23.016256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.016284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.016416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.016442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.016594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.016658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1baeea0 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.016835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.016876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.016989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.017017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.017133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.017165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.017350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.017377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.017564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.017590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.017697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.017742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.017871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.017898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 
00:25:05.066 [2024-07-15 13:04:23.017992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.018019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.018223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.018250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.018389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.018416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.018579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.018605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.018762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.018790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.018891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.018918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.019035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.019062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.019172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.019199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.019353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.019380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.019488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.019516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 
00:25:05.066 [2024-07-15 13:04:23.019713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.019746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.019906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.019934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.020091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.020134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.020279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.020307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.020454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.020481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.020597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.020625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.020792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.020821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.020952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.020979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.066 qpair failed and we were unable to recover it. 00:25:05.066 [2024-07-15 13:04:23.021145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.066 [2024-07-15 13:04:23.021172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.021318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.021346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 
00:25:05.067 [2024-07-15 13:04:23.021468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.021495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.021610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.021642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.021800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.021829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.021959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.021987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.022141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.022168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.022335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.022362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.022583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.022619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.022774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.022802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.022939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.022968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.023099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.023127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 
00:25:05.067 [2024-07-15 13:04:23.023252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.023280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.023448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.023475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.023585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.023613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.023815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.023843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.023974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.024126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.024294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.024461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.024623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.024808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 
00:25:05.067 [2024-07-15 13:04:23.024944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.024972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.025154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.025182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.025346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.025373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.025553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.025581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.025735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.025769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.025903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.067 [2024-07-15 13:04:23.025931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.067 qpair failed and we were unable to recover it. 00:25:05.067 [2024-07-15 13:04:23.026038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.026067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.026257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.026286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.026385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.026414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.026555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.026583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 
00:25:05.068 [2024-07-15 13:04:23.026816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.026846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.027006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.027034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.027219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.027252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.027389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.027418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.027606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.027635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.027779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.027808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.028005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.028033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.028201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.028229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.028363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.028392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.028573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.028602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 
00:25:05.068 [2024-07-15 13:04:23.028785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.028815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.028928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.028957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.029138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.029167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.029332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.029361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.029549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.029578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.029721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.029756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.029904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.029933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.030076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.030105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.030280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.030309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 00:25:05.068 [2024-07-15 13:04:23.030476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.068 [2024-07-15 13:04:23.030512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.068 qpair failed and we were unable to recover it. 
00:25:05.073 [2024-07-15 13:04:23.068947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.068996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.069143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.069192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.069314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.069357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.069515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.069548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.069735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.069774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.069972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.070028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.070239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.070290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.070445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.070499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.070631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.070665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.070899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.070957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 
00:25:05.073 [2024-07-15 13:04:23.071189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.071243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.071445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.071506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.071620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.071654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.071864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.071923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.072158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.072210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.072347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.072396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.072575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.072608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.072723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.072765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.072964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.073019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 00:25:05.073 [2024-07-15 13:04:23.073181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.073 [2024-07-15 13:04:23.073240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.073 qpair failed and we were unable to recover it. 
00:25:05.073 [2024-07-15 13:04:23.073401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.073458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.073583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.073617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.073853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.073892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.074089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.074139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.074291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.074325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.074557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.074591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.074723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.074763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.074915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.074948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.075080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.075114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.075237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.075271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 
00:25:05.074 [2024-07-15 13:04:23.075469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.075502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.075645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.075678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.075856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.075900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.076087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.076122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.076255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.076464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.076505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.076630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.076664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.076817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.076868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.077083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.077138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.077384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.077435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 
00:25:05.074 [2024-07-15 13:04:23.077585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.077625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.077846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.077897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.078092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.078141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.078322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.078380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.078555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.078589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.078810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.078861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.079081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.079131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.079297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.079347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.079474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.079507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.079720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.079773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 
00:25:05.074 [2024-07-15 13:04:23.079963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.079999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.080197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.080231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.080382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.080433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.080600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.080636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.080778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.080812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.081010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.081060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.081209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.081262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.081564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.081597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.081858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.081896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 00:25:05.074 [2024-07-15 13:04:23.082076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.082131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.074 qpair failed and we were unable to recover it. 
00:25:05.074 [2024-07-15 13:04:23.082319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.074 [2024-07-15 13:04:23.082369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.082531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.082565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.082701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.082765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.082936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.082986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.083184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.083235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.083391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.083442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.083660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.083701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.083866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.083900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.084106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.084143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.084320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.084373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 
00:25:05.075 [2024-07-15 13:04:23.084507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.084541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.084745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.084780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.084934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.084990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.085161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.085215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.085373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.085407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.085721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.085765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.086001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.086042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.086156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.086190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.086438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.086489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.086918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.086953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 
00:25:05.075 [2024-07-15 13:04:23.087109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.087161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.087369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.087423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.087647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.087684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.087871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.087906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.088060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.088111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.088298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.088353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.088518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.088552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.088708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.088747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.088945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.088989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.089204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.089260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 
00:25:05.075 [2024-07-15 13:04:23.089446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.089497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.089651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.089689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.089877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.089932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.090062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.090099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.090263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.090314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.075 [2024-07-15 13:04:23.090550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.075 [2024-07-15 13:04:23.090590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.075 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.090773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.090820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.090980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.091031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.091155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.091192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.091411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.091476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 
00:25:05.076 [2024-07-15 13:04:23.091670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.091710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.091869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.091920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.092113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.092163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.092318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.092375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.092546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.092587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.092767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.092808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.092957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.093008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.093129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.093180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.093321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.093356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.093555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.093589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 
00:25:05.076 [2024-07-15 13:04:23.093729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.093777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.093908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.093960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.094111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.094146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.094369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.094410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.094588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.094622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.094773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.094807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.094983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.095036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.095242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.095296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.095425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.095458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.095694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.095728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 
00:25:05.076 [2024-07-15 13:04:23.095997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.096049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.096219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.096279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.096453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.096503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.096653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.096687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.096832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.096866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.097025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.097058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.097198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.097232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.097376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.097409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.097586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.097619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.097879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.097921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 
00:25:05.076 [2024-07-15 13:04:23.098088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.098122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.098227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.098260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.098434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.098469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.098667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.098700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.098890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.098949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.076 [2024-07-15 13:04:23.099117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.076 [2024-07-15 13:04:23.099171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.076 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.099300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.099352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.099514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.099547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.099696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.099730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.099930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.099987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 
00:25:05.077 [2024-07-15 13:04:23.100133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.100166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.100298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.100333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.100588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.100630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.100793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.100827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.101025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.101060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.101206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.101240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.101384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.101435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.101696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.101729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.101916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.101970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.102141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.102198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 
00:25:05.077 [2024-07-15 13:04:23.102377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.102411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.102521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.102554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.102767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.102801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.103015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.103067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.103209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.103259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.103436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.103487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.103638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.103672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.103821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.103873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.104023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.104071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.104219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.104270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 
00:25:05.077 [2024-07-15 13:04:23.104400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.104443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.104609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.104642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.104797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.104832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.104972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.105006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.105250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.105291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.105513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.105547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.105763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.105797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.105946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.106001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.106214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.106270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.106427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.106478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 
00:25:05.077 [2024-07-15 13:04:23.106715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.106754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.107075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.107126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.107271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.107321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.107472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.107523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.107679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.107719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.107883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.107917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.077 qpair failed and we were unable to recover it. 00:25:05.077 [2024-07-15 13:04:23.108119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.077 [2024-07-15 13:04:23.108160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.108380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.108435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.108636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.108678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.108832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.108888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 
00:25:05.078 [2024-07-15 13:04:23.109090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.109131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.109283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.109334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.109489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.109529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.109682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.109725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.109931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.109995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.110203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.110242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.110397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.110438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.110591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.110624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.110762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.110806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.111050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.111105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 
00:25:05.078 [2024-07-15 13:04:23.111250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.111305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.111451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.111484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.111651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.111685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.111916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.111968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.112201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.112252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.112410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.112469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.112636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.112670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.112873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.112925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.113125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.113184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.113387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.113441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 
00:25:05.078 [2024-07-15 13:04:23.113615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.113648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.113851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.113915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.114090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.114141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.114291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.114343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.114503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.114548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.114670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.114704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.114883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.114935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.115154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.115205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.115398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.115452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.115655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.115696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 
00:25:05.078 [2024-07-15 13:04:23.115826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.115868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.116033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.116085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.116274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.116326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.116482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.116516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.116682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.116716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.116883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.116936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.117068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.078 [2024-07-15 13:04:23.117120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.078 qpair failed and we were unable to recover it. 00:25:05.078 [2024-07-15 13:04:23.117317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.117358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.117506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.117546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.117702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.117760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 
00:25:05.079 [2024-07-15 13:04:23.117914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.117948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.118078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.118111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.118292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.118326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.118429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.118463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.118597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.118630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.118886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.118921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.119143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.119202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.119386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.119438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.119592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.119626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.119764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.119799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 
00:25:05.079 [2024-07-15 13:04:23.119963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.120016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.120183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.120238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.120391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.120425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.120578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.120612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.120760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.120795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.120966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.121017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.121160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.121223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.121380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.121414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.121542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.121576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.121721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.121798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 
00:25:05.079 [2024-07-15 13:04:23.121983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.122040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.122220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.122270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.122381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.122414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.122609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.122643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.122812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.122866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.122991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.123024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.123192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.123226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.123348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.123382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.123580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.123626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.123746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.123781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 
00:25:05.079 [2024-07-15 13:04:23.123918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.123951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.124104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.124138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.124317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.124351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.124550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.124584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.124760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.124795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.124947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.124998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.125158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.125210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.125388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.079 [2024-07-15 13:04:23.125421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.079 qpair failed and we were unable to recover it. 00:25:05.079 [2024-07-15 13:04:23.125533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.125566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.125717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.125764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 
00:25:05.080 [2024-07-15 13:04:23.125922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.125983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.126188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.126245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.126461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.126512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.126713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.126754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.126968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.127025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.127198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.127249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.127422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.127474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.127641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.127679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.127902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.127962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.128211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.128262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 
00:25:05.080 [2024-07-15 13:04:23.128388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.128443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.128597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.128630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.128780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.128815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.129038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.129094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.129222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.129275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.129473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.129512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.129664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.129697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.129873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.129929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.130090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.130152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.130291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.130343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 
00:25:05.080 [2024-07-15 13:04:23.130556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.130595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.130710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.130749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.131012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.131051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.131265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.131325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.131470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.131504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.131721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.131761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.131929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.131978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.132111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.132169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.132393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.132444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.132689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.132722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 
00:25:05.080 [2024-07-15 13:04:23.133064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.133116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.133320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.133373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.133488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.080 [2024-07-15 13:04:23.133551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.080 qpair failed and we were unable to recover it. 00:25:05.080 [2024-07-15 13:04:23.133688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.133721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.133892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.133946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.134121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.134174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.134328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.134387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.134592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.134633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.134784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.134818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.135048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.135091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 
00:25:05.081 [2024-07-15 13:04:23.135290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.135324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.135450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.135483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.135687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.135720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.135894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.135947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.136183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.136236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.136424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.136477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.136600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.136634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.136789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.136845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.137009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.137064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 00:25:05.081 [2024-07-15 13:04:23.137195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.081 [2024-07-15 13:04:23.137250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.081 qpair failed and we were unable to recover it. 
00:25:05.081 [2024-07-15 13:04:23.137400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.081 [2024-07-15 13:04:23.137434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.081 qpair failed and we were unable to recover it.
00:25:05.086 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 13:04:23.137 through 13:04:23.182; every retry in this stretch ends the same way ...]
00:25:05.086 [2024-07-15 13:04:23.182346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.182380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.182527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.182560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.182751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.182786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.182937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.182989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.183128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.183161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.183325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.183359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.183558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.183597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.183751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.183786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.183905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.183940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.184108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.184142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 
00:25:05.086 [2024-07-15 13:04:23.184311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.184345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.184471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.086 [2024-07-15 13:04:23.184505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.086 qpair failed and we were unable to recover it. 00:25:05.086 [2024-07-15 13:04:23.184631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.184664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.184830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.184884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.185006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.185040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.185177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.185210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.185361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.185394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.185615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.185649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.185804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.185838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.185978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.186012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 
00:25:05.087 [2024-07-15 13:04:23.186172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.186206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.186364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.186398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.186508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.186541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.186725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.186765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.186880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.186914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.187083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.187117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.187265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.187299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.187408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.187441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.187578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.187611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.187862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.187896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 
00:25:05.087 [2024-07-15 13:04:23.188103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.188146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.188266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.188321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.188493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.188527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.188642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.188676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.188905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.188939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.189088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.189140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.189335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.189388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.189543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.189583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.189768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.189803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.189954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.190007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 
00:25:05.087 [2024-07-15 13:04:23.190219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.190253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.190397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.190430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.190651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.190684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.190813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.190847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.190994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.191049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.191204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.191238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.191428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.191476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.191657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.191691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.191879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.191933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.192043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.192077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 
00:25:05.087 [2024-07-15 13:04:23.192237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.192289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.192491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.192534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.087 [2024-07-15 13:04:23.192675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.087 [2024-07-15 13:04:23.192709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.087 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.192880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.192932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.193073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.193126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.193267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.193301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.193502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.193545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.193696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.193730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.193921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.193955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.194081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.194115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 
00:25:05.088 [2024-07-15 13:04:23.194309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.194343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.194451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.194485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.194640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.194674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.194849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.194910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.195150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.195203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.195384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.195438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.195606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.195639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.195811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.195864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.196029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.196064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.196215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.196248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 
00:25:05.088 [2024-07-15 13:04:23.196418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.196452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.196590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.196624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.196761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.196796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.196940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.196993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.197152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.197186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.197332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.197365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.197509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.197542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.197779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.197814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.197984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.198017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.198200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.198252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 
00:25:05.088 [2024-07-15 13:04:23.198432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.198489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.198635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.198668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.198822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.198875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.199037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.088 [2024-07-15 13:04:23.199089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.088 qpair failed and we were unable to recover it. 00:25:05.088 [2024-07-15 13:04:23.199279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.199331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.199472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.199506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.199687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.199726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.199873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.199926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.200116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.200150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.200355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.200407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 
00:25:05.089 [2024-07-15 13:04:23.200615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.200652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.200887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.200940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.201136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.201188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.201382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.201442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.201594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.201628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.201794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.201853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.202012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.202065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.202221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.202274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.202430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.202464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.202601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.202634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 
00:25:05.089 [2024-07-15 13:04:23.202868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.202921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.203072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.203124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.203308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.203361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.203540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.203573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.203756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.203811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.203965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.204017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.204202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.204254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.204403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.204446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.204599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.204633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.204780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.204814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 
00:25:05.089 [2024-07-15 13:04:23.204962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.204996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.205130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.205164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.205365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.205399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.205553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.205587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.205772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.205806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.205964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.205998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.206206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.206260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.206405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.206438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.206555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.206589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.206772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.206807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 
00:25:05.089 [2024-07-15 13:04:23.206940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.206993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.207167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.207219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.207367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.207401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.207553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.207586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.089 qpair failed and we were unable to recover it. 00:25:05.089 [2024-07-15 13:04:23.207693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.089 [2024-07-15 13:04:23.207726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.207858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.207893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.208130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.208177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.208356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.208389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.208533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.208566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.208742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.208776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 
00:25:05.090 [2024-07-15 13:04:23.208962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.209015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.209175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.209226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.209379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.209430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.209607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.209640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.209801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.209857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.210012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.210070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.210265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.210320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.210507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.210541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.210660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.210694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.210914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.210975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 
00:25:05.090 [2024-07-15 13:04:23.211181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.211233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.211383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.211435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.211574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.211608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.211750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.211784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.211986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.212021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.212128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.212162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.212334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.212368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.212542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.212576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.212768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.212803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.212961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.213014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 
00:25:05.090 [2024-07-15 13:04:23.213174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.213226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.213368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.213401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.213527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.213561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.213690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.213732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.213969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.214003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.214181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.214234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.214368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.214402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.214539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.214573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.214727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.214768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.214880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.214914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 
00:25:05.090 [2024-07-15 13:04:23.215070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.215103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.215259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.215293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.215480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.215513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.215645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.215679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.215830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.090 [2024-07-15 13:04:23.215864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.090 qpair failed and we were unable to recover it. 00:25:05.090 [2024-07-15 13:04:23.216010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.216044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.216185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.216224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.216397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.216442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.216618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.216652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.216792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.216827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 
00:25:05.091 [2024-07-15 13:04:23.216934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.216968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.217140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.217173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.217296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.217330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.217475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.217510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.217657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.217700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.217850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.217884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.218034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.218068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.218206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.218240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.218375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.218408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.218563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.218597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 
00:25:05.091 [2024-07-15 13:04:23.218757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.218792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.218915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.218948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.219089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.219123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.219281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.219325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.219501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.219535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.219688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.219722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.219889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.219923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.220085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.220119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.220323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.220357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.220537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.220571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 
00:25:05.091 [2024-07-15 13:04:23.220679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.220712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.220873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.220925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.221047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.221102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.221296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.221355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.221542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.221578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.221729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.221803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.221933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.221987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.222171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.222221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.222389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.222423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.222581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.222620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 
00:25:05.091 [2024-07-15 13:04:23.222769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.222804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.222972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.223027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.223200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.223253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.223373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.223407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.223557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.223590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.091 [2024-07-15 13:04:23.223760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.091 [2024-07-15 13:04:23.223795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.091 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.223988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.224035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.224213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.224265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.224428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.224462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.224609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.224643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 
00:25:05.092 [2024-07-15 13:04:23.224821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.224874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.225060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.225112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.225300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.225353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.225496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.225530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.225698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.225736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.225910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.225964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.226128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.226183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.226370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.226422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.226575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.226615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.226759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.226793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 
00:25:05.092 [2024-07-15 13:04:23.226968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.227020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.227171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.227227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.227378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.227418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.227534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.227567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.227696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.227758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.227903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.227937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.228085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.228119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.228273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.228306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.228413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.228447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.228645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.228679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 
00:25:05.092 [2024-07-15 13:04:23.228840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.228874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.229036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.229069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.229248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.229281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.229433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.229467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.229675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.229709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.229872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.229925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.230071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.230122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.230279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.230312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.230456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.230489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.230636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.230671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 
00:25:05.092 [2024-07-15 13:04:23.230843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.230885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.231068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.231121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.231324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.231366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.231542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.231575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.231752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.231805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.231992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.232045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.092 [2024-07-15 13:04:23.232203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.092 [2024-07-15 13:04:23.232269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.092 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.232427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.232461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.232612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.232646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.232800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.232856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 
00:25:05.093 [2024-07-15 13:04:23.233002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.233056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.233209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.233243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.233363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.233397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.233587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.233621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.233833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.233868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.233977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.234011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.234176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.234210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.234360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.234394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.234522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.234556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.234730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.234769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 
00:25:05.093 [2024-07-15 13:04:23.234909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.234965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.235154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.235193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.235352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.235404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.235575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.235609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.235785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.235820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.235940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.235996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.236134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.236190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.236363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.236396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.236545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.236578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.093 [2024-07-15 13:04:23.236700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.236735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 
00:25:05.093 [2024-07-15 13:04:23.236864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.093 [2024-07-15 13:04:23.236898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.093 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.237010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.237045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.237168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.237202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.237312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.237347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.237486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.237519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.237675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.237709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.237854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.237890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.238004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.238038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.238195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.238229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.238368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.238401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 
00:25:05.372 [2024-07-15 13:04:23.238521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.238554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.238669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.238703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.238840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.238875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.238991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.239024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.239174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.239208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.239379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.239414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.239525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.239564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.239717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.239759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.239883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.239916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 00:25:05.372 [2024-07-15 13:04:23.240040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.372 [2024-07-15 13:04:23.240074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.372 qpair failed and we were unable to recover it. 
00:25:05.373 [2024-07-15 13:04:23.240210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.240245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.240387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.240421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.240557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.240590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.240711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.240754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.240898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.240931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.241044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.241078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.241288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.241334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.241520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.241554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.241771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.241806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 00:25:05.373 [2024-07-15 13:04:23.241959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.373 [2024-07-15 13:04:23.241993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.373 qpair failed and we were unable to recover it. 
00:25:05.373 [2024-07-15 13:04:23.242138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.373 [2024-07-15 13:04:23.242172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.373 qpair failed and we were unable to recover it.
00:25:05.381 [2024-07-15 13:04:23.242340 .. 13:04:23.285053] (the same three-line connect() / qpair error sequence for tqpair=0x7f7dd0000b90, addr=10.0.0.2, port=4420 repeats continuously over this interval; only the timestamps differ)
00:25:05.381 [2024-07-15 13:04:23.285198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.285231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.285385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.285419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.285552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.285586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.285788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.285828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.285999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.286033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.286149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.286183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.286323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.286357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.286509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.286543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.286690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.286723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.286913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.286947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 
00:25:05.381 [2024-07-15 13:04:23.287090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.287124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.287264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.287297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.287457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.287491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.287696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.287729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.287884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.287917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.288085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.288119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.288259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.288293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.288417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.288451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.288715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.381 [2024-07-15 13:04:23.288754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.381 qpair failed and we were unable to recover it. 00:25:05.381 [2024-07-15 13:04:23.288911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.288944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 
00:25:05.382 [2024-07-15 13:04:23.289151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.289185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.289361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.289413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.289546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.289587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.289826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.289861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.289994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.290030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.290200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.290234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.290419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.290460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.290630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.290663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.290839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.290893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.291078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.291130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 
00:25:05.382 [2024-07-15 13:04:23.291295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.291351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.291469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.291502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.291757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.291792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.292020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.292071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.292277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.292334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.292488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.292522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.292665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.292699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.292863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.292915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.293043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.293077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.293271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.293326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 
00:25:05.382 [2024-07-15 13:04:23.293453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.293487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.293601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.293635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.293829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.293864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.293981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.294020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.294236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.294276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.294440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.294473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.294666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.294700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.294935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.294989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.295174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.295226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.295496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.295548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 
00:25:05.382 [2024-07-15 13:04:23.295728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.295769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.295890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.295947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.296128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.296178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.296393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.296446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.296648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.296687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.296847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.296902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.382 [2024-07-15 13:04:23.297095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.382 [2024-07-15 13:04:23.297151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.382 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.297419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.297472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.297648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.297681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.297908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.297968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 
00:25:05.383 [2024-07-15 13:04:23.298160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.298212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.298329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.298384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.298563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.298607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.298814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.298865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.299019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.299070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.299274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.299330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.299510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.299553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.299773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.299813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.299982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.300038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.300210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.300261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 
00:25:05.383 [2024-07-15 13:04:23.300420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.300478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.300680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.300713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.300920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.300975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.301173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.301233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.301400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.301453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.301632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.301665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.301877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.301929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.302107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.302159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.302340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.302392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.302537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.302570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 
00:25:05.383 [2024-07-15 13:04:23.302717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.302778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.302995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.303047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.303210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.303244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.303394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.303452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.303582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.303627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.303851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.303885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.304036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.304070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.304242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.304277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.304452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.304486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.304697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.304749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 
00:25:05.383 [2024-07-15 13:04:23.304904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.304956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.305201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.305253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.305488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.305543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.305696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.305745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.305957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.305991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.306210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.306271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.306453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.383 [2024-07-15 13:04:23.306504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.383 qpair failed and we were unable to recover it. 00:25:05.383 [2024-07-15 13:04:23.306666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.306703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.306884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.306938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.307225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.307277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 
00:25:05.384 [2024-07-15 13:04:23.307501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.307555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.307825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.307882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.308025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.308081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.308272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.308305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.308462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.308525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.308678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.308712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.308871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.308924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.309044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.309078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.309212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.309246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.309422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.309462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 
00:25:05.384 [2024-07-15 13:04:23.309614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.309648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.309827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.309861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.310001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.310035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.310219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.310258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.310430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.310464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.310592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.310626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.310803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.310838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.311006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.311039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.311158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.311191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.311334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.311368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 
00:25:05.384 [2024-07-15 13:04:23.311525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.311559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.311705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.311744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.311919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.311953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.312077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.312116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.312254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.312288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.312440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.312473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.312629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.312663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.312814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.312867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.313075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.313128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 00:25:05.384 [2024-07-15 13:04:23.313316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.384 [2024-07-15 13:04:23.313368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.384 qpair failed and we were unable to recover it. 
00:25:05.384 [2024-07-15 13:04:23.313535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.384 [2024-07-15 13:04:23.313569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.384 qpair failed and we were unable to recover it.
00:25:05.384-00:25:05.390 [2024-07-15 13:04:23.313762 through 13:04:23.361837] The same three-line failure repeats for every subsequent connection attempt in this window: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f7dd0000b90 (addr=10.0.0.2, port=4420), and each qpair fails and cannot be recovered.
00:25:05.390 [2024-07-15 13:04:23.362052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.362106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.362360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.362413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.362581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.362615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.362874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.362927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.363129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.363180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.363457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.363512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.363715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.363755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.363930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.363982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.364208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.364260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.364517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.364568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 
00:25:05.390 [2024-07-15 13:04:23.364822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.364879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.365048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.365100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.365262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.365314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.365542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.365594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.365951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.366003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.366268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.366319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.366576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.366627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.366863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.366915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.367176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.367242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.367408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.367463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 
00:25:05.390 [2024-07-15 13:04:23.367638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.367677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.367831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.367885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.368141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.368205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.368469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.368528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.368805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.368840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.369061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.369119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.369390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.369442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.369703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.369742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.369904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.369938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 00:25:05.390 [2024-07-15 13:04:23.370195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.390 [2024-07-15 13:04:23.370251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.390 qpair failed and we were unable to recover it. 
00:25:05.391 [2024-07-15 13:04:23.370443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.370496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.370663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.370697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.371009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.371044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.371274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.371326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.371519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.371570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.371821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.371855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.372120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.372170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.372366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.372419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.372604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.372644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.372795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.372829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 
00:25:05.391 [2024-07-15 13:04:23.373042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.373095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.373290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.373352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.373583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.373636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.373875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.373917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.374127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.374179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.374336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.374388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.374597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.374630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.374799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.374854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.375058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.375112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.375270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.375320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 
00:25:05.391 [2024-07-15 13:04:23.375497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.375531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.375682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.375716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.375975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.376030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.376250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.376301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.376466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.376518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.376702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.376735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.377014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.377071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.377212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.377266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.377411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.377469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.377617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.377655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 
00:25:05.391 [2024-07-15 13:04:23.377832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.377884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.378086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.378139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.378359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.378410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.378657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.378695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.378916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.378969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.379199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.379253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.379524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.379576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.379767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.379820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.380070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.380125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 00:25:05.391 [2024-07-15 13:04:23.380306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.391 [2024-07-15 13:04:23.380358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.391 qpair failed and we were unable to recover it. 
00:25:05.391 [2024-07-15 13:04:23.380573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.380606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.380858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.380911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.381167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.381222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.381460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.381514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.381681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.381715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.381907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.381959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.382141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.382207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.382410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.382462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.382671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.382705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.382970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.383025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 
00:25:05.392 [2024-07-15 13:04:23.383278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.383329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.383519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.383580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.383873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.383943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.384201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.384255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.384390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.384442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.384658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.384692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.384962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.385018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.385183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.385233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.385403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.385444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.385672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.385706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 
00:25:05.392 [2024-07-15 13:04:23.385987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.386045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.386229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.386282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.386489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.386544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.386695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.386730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.386947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.387006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.392 [2024-07-15 13:04:23.387254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.392 [2024-07-15 13:04:23.387307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.392 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.387461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.387515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.387729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.387769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.388026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.388059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.388325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.388378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 
00:25:05.393 [2024-07-15 13:04:23.388584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.388639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.388790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.388829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.389030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.389086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.389291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.389350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.389543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.389598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.389753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.389787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.389988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.390044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.390312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.390365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.390617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.390651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.390925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.390978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 
00:25:05.393 [2024-07-15 13:04:23.391240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.391292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.391556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.391610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.391877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.391933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.392159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.392214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.392394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.392454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.392700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.392734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.392978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.393041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.393232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.393284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.393482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.393 [2024-07-15 13:04:23.393535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.393 qpair failed and we were unable to recover it. 00:25:05.393 [2024-07-15 13:04:23.393750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.393784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-07-15 13:04:23.394005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.394065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.394270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.394324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.394509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.394564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.394795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.394859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.395130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.395184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.395449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.395502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.395669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.395702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.395932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.395965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.396183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.396237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 00:25:05.394 [2024-07-15 13:04:23.396386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.394 [2024-07-15 13:04:23.396439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-07-15 13:04:23.396620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.394 [2024-07-15 13:04:23.396654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.394 qpair failed and we were unable to recover it.
00:25:05.394 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 13:04:23.396 through 13:04:23.454 ...]
00:25:05.398 [2024-07-15 13:04:23.454133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.398 [2024-07-15 13:04:23.454189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.398 qpair failed and we were unable to recover it.
00:25:05.399 [2024-07-15 13:04:23.454452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.454506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.454758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.454793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.455040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.455074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.455291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.455343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.455575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.455627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.455879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.455913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.456098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.456151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.456330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.456384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.399 [2024-07-15 13:04:23.456586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.399 [2024-07-15 13:04:23.456620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.399 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.456867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.456901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 
00:25:05.400 [2024-07-15 13:04:23.457124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.457178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.457441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.457497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.457757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.457791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.458001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.458056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.458266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.458318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.458538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.458591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.458755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.458789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.459061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.459116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.459386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.459440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.459659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.459693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 
00:25:05.400 [2024-07-15 13:04:23.459969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.460022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.460252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.460305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.460528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.460583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.460874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.460933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.461167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.461222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.461458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.461511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.461779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.461812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.462083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.462138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.462420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.462473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.462652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.462685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 
00:25:05.400 [2024-07-15 13:04:23.462947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.462987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.463245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.463299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.463563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.463614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.463841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.463875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.464140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.464195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.464426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.464482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.464744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.464779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.465008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.465041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.465252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.465307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.465541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.465594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 
00:25:05.400 [2024-07-15 13:04:23.465821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.465855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.466072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.466133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.466390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.466445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.466669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.466702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.466925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.466959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.467196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.467249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.467513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.467567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.467834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.467868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.468106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.400 [2024-07-15 13:04:23.468163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.400 qpair failed and we were unable to recover it. 00:25:05.400 [2024-07-15 13:04:23.468406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.468459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 
00:25:05.401 [2024-07-15 13:04:23.468712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.468753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.469013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.469046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.469307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.469361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.469633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.469688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.469883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.469918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.470192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.470247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.470437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.470492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.470707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.470749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.471009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.471043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.471274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.471330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 
00:25:05.401 [2024-07-15 13:04:23.471562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.471617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.471826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.471860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.472121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.472176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.472431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.472493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.472751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.472786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.472997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.473031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.473242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.473297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.473572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.473624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.473895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.473929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.474154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.474207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 
00:25:05.401 [2024-07-15 13:04:23.474473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.474537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.474792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.474826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.475105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.475165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.475444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.475498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.475767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.475802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.476059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.476093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.476375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.476434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.476691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.476725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.476998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.477032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.477228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.477283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 
00:25:05.401 [2024-07-15 13:04:23.477510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.477564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.477822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.477856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.478136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.478192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.478406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.478460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.478730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.478773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.478980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.479014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.479282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.479338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.401 [2024-07-15 13:04:23.479603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.401 [2024-07-15 13:04:23.479658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.401 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.479918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.479952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.480235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.480288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 
00:25:05.402 [2024-07-15 13:04:23.480563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.480618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.480877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.480910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.481130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.481184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.481459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.481512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.481731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.481772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.482031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.482064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.482332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.482386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.482672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.482728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.482993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.483027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.483284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.483340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 
00:25:05.402 [2024-07-15 13:04:23.483541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.483600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.483856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.483890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.484165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.484221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.484433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.484486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.484694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.484727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.484990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.485023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.485258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.485314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.485581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.485636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.485887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.485921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.486196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.486250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 
00:25:05.402 [2024-07-15 13:04:23.486484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.486546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.486799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.486833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.487098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.487153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.487421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.487478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.487730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.487771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.488027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.488061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.488331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.402 [2024-07-15 13:04:23.488394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.402 qpair failed and we were unable to recover it. 00:25:05.402 [2024-07-15 13:04:23.488617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.488671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.488836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.488871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.489144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.489199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 
00:25:05.403 [2024-07-15 13:04:23.489476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.489531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.489763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.489797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.490052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.490085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.490357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.490414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.490642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.490697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.490903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.490938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.491204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.491263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.491494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.491548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.491792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.491826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.492031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.492092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 
00:25:05.403 [2024-07-15 13:04:23.492318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.492374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.492648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.492703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.492881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.492916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.493137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.493191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.493423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.493475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.493745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.493779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.494025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.494059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.494293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.494347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.494609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.494663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.494865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.494900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 
00:25:05.403 [2024-07-15 13:04:23.495168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.495223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.495496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.495553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.495816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.495850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.496076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.496110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.496381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.496438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.496703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.403 [2024-07-15 13:04:23.496743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.403 qpair failed and we were unable to recover it. 00:25:05.403 [2024-07-15 13:04:23.497003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.497037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.497311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.497363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.497600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.497655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.497862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.497896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 
00:25:05.404 [2024-07-15 13:04:23.498128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.498190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.498456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.498511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.498735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.498777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.499035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.499068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.499306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.499358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.499623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.499676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.499949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.499983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.500191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.500243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.500442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.500506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.500716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.500757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 
00:25:05.404 [2024-07-15 13:04:23.501026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.501076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.501292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.501344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.501571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.501626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.501802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.501836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.502100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.502155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.502430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.502484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.502749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.502784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.502979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.503013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.503276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.503334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.503611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.503665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 
00:25:05.404 [2024-07-15 13:04:23.503867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.503901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.504186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.504241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.404 qpair failed and we were unable to recover it. 00:25:05.404 [2024-07-15 13:04:23.504470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.404 [2024-07-15 13:04:23.504525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.504688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.504721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.504984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.505018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.505201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.505256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.505490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.505543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.505805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.505840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.506114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.506177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.506402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.506454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 
00:25:05.405 [2024-07-15 13:04:23.506653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.506686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.506950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.506983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.507249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.507304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.507575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.507630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.507910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.507944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.508215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.508267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.508484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.508538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.508793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.508827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.508990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.509051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.509249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.509302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 
00:25:05.405 [2024-07-15 13:04:23.509538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.509598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.509817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.509873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.510134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.510186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.510419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.510472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.510732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.510773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.510986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.511019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.511291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.511344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.511617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.511672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.405 [2024-07-15 13:04:23.511932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.405 [2024-07-15 13:04:23.511966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.405 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.512225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.512278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 
00:25:05.406 [2024-07-15 13:04:23.512552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.512605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.512819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.512853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.513071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.513125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.513391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.513445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.513650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.513684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.513962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.514021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.514258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.514313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.514569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.514623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.514816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.514850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.515076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.515130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 
00:25:05.406 [2024-07-15 13:04:23.515397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.515452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.515667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.515700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.515969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.516026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.516299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.516351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.516630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.516683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.516905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.516939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.517210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.517264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.517543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.517598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.517854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.517888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.518124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.518178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 
00:25:05.406 [2024-07-15 13:04:23.518449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.518502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.518760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.518794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.519055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.519088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.519310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.519364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.519633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.519687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.406 qpair failed and we were unable to recover it. 00:25:05.406 [2024-07-15 13:04:23.519954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.406 [2024-07-15 13:04:23.519988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.520258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.520314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.520588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.520642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.520903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.520937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.521200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.521255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 
00:25:05.407 [2024-07-15 13:04:23.521494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.521551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.521809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.521843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.522108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.522161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.522431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.522483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.522692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.522726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.522998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.523031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.523203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.523257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.523527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.523581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.523799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.523834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.524102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.524153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 
00:25:05.407 [2024-07-15 13:04:23.524419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.524472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.524691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.524724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.524999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.525032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.525314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.525369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.525604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.525658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.525909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.525943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.526215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.526268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.526545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.526600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.526829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.526863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.527131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.527183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 
00:25:05.407 [2024-07-15 13:04:23.527427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.527482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.407 [2024-07-15 13:04:23.527696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.407 [2024-07-15 13:04:23.527729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.407 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.527993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.528027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.528239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.528293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.528558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.528612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.528770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.528803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.529070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.529128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.529359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.529412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.529632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.529666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.529940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.529992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 
00:25:05.408 [2024-07-15 13:04:23.530268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.530321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.530584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.530639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.530894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.530928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.531201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.531256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.531533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.531585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.531855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.531908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.532171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.532223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.532497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.532552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.532809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.532842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.533117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.533171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 
00:25:05.408 [2024-07-15 13:04:23.533435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.533513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.408 qpair failed and we were unable to recover it. 00:25:05.408 [2024-07-15 13:04:23.533788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.408 [2024-07-15 13:04:23.533822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.534095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.534148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.534337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.534391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.534668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.534722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.534998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.535031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.535243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.535294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.535569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.535622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.535890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.535924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.536193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.536245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 
00:25:05.409 [2024-07-15 13:04:23.536506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.536559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.536819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.536853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.537120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.537175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.537395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.537448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.537670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.537704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.538021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.538056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.538321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.538374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.538597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.538650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.538866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.538900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.539150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.539203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 
00:25:05.409 [2024-07-15 13:04:23.539463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.539517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.539732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.539773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.539977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.540010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.540269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.540322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.540592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.540645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.540907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.540941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.541170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.541225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.541445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.409 [2024-07-15 13:04:23.541497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.409 qpair failed and we were unable to recover it. 00:25:05.409 [2024-07-15 13:04:23.541708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.410 [2024-07-15 13:04:23.541748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.410 qpair failed and we were unable to recover it. 00:25:05.410 [2024-07-15 13:04:23.542007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.410 [2024-07-15 13:04:23.542040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.410 qpair failed and we were unable to recover it. 
00:25:05.410 [2024-07-15 13:04:23.542308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.410 [2024-07-15 13:04:23.542362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.410 qpair failed and we were unable to recover it.
00:25:05.410 [... the same three-line error repeats for each successive reconnect attempt between 2024-07-15 13:04:23.542 and 13:04:23.603: connect() failed, errno = 111; sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:25:05.695 [2024-07-15 13:04:23.603428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.695 [2024-07-15 13:04:23.603483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.695 qpair failed and we were unable to recover it.
00:25:05.695 [2024-07-15 13:04:23.603754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.603788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.604009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.604043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.604279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.604334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.604607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.604660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.604920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.604955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.605149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.605202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.605423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.605475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.605727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.605768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.606028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.606061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.695 qpair failed and we were unable to recover it. 00:25:05.695 [2024-07-15 13:04:23.606330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.695 [2024-07-15 13:04:23.606384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 
00:25:05.696 [2024-07-15 13:04:23.606604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.606657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.606881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.606915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.607187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.607242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.607512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.607570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.607808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.607842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.608050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.608104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.608336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.608391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.608586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.608619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.608802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.608868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.609046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.609100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 
00:25:05.696 [2024-07-15 13:04:23.609365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.609419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.609671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.609704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.609938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.609991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.610227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.610281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.610506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.610560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.610813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.610847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.611077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.611133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.611335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.611395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.611609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.611642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.611907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.611941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 
00:25:05.696 [2024-07-15 13:04:23.612174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.612227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.612437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.612491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.612754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.612788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.613012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.613045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.613265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.613318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.613588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.613644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.613851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.613886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.614152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.614205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.614478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.614531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.614795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.614829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 
00:25:05.696 [2024-07-15 13:04:23.615096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.615151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.615428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.615482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.615754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.696 [2024-07-15 13:04:23.615789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.696 qpair failed and we were unable to recover it. 00:25:05.696 [2024-07-15 13:04:23.616057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.616091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.616354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.616407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.616642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.616695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.616968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.617002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.617191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.617243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.617507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.617561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.617793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.617827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 
00:25:05.697 [2024-07-15 13:04:23.618074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.618107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.618372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.618426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.618672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.618726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.618997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.619030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.619264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.619319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.619553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.619606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.619864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.619899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.620115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.620168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.620394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.620447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.620700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.620734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 
00:25:05.697 [2024-07-15 13:04:23.621006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.621040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.621277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.621330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.621604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.621658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.621921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.621955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.622188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.622241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.622517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.622570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.622839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.622873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.623089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.623149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.623385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.623438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.623685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.623719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 
00:25:05.697 [2024-07-15 13:04:23.623898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.623932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.624217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.624272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.624532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.624586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.624800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.624834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.625100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.625155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.625423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.625478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.625751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.625786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.697 qpair failed and we were unable to recover it. 00:25:05.697 [2024-07-15 13:04:23.626046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.697 [2024-07-15 13:04:23.626079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.626273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.626329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.626603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.626656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 
00:25:05.698 [2024-07-15 13:04:23.626935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.626969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.627160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.627215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.627475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.627530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.627793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.627827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.628063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.628116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.628384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.628436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.628698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.628732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.628918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.628952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.629159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.629214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.629436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.629491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 
00:25:05.698 [2024-07-15 13:04:23.629759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.629793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.630053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.630087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.630311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.630364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.630594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.630646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.630878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.630912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.631137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.631191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.631428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.631484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.631724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.631776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.631976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.632010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.632226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.632279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 
00:25:05.698 [2024-07-15 13:04:23.632555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.632610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.632826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.632860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.633087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.633141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.633405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.633459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.633727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.698 [2024-07-15 13:04:23.633769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.698 qpair failed and we were unable to recover it. 00:25:05.698 [2024-07-15 13:04:23.633998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.634031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.634300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.634353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.634631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.634692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.634951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.634985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.635207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.635260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 
00:25:05.699 [2024-07-15 13:04:23.635533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.635583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.635856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.635889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.636094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.636147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.636361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.636414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.636635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.636668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.636969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.637025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.637299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.637353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.637632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.637686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.637956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.637990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.638271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.638331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 
00:25:05.699 [2024-07-15 13:04:23.638601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.638655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.638881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.638916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.639181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.639234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.639443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.639497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.639761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.639795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.640015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.640070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.640326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.640381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.640611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.640666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.640886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.640921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.641098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.641153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 
00:25:05.699 [2024-07-15 13:04:23.641402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.641456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.699 qpair failed and we were unable to recover it. 00:25:05.699 [2024-07-15 13:04:23.641709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.699 [2024-07-15 13:04:23.641751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.641948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.641982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.642214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.642269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.642514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.642569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.642777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.642811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.643092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.643150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.643380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.643435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.643709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.643750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.644011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.644044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 
00:25:05.700 [2024-07-15 13:04:23.644324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.644375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.644557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.644610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.644867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.644902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.645175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.645228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.645454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.645508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.645767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.645802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.646047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.646081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.646342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.646401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.646628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.646684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.646950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.646984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 
00:25:05.700 [2024-07-15 13:04:23.647254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.647306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.647538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.647591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.647841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.647875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.648102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.648155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.648417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.648469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.648747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.648782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.648945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.648979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.649219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.649272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.700 [2024-07-15 13:04:23.649500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.700 [2024-07-15 13:04:23.649554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.700 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.649788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.649822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 
00:25:05.701 [2024-07-15 13:04:23.650027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.650082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.650353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.650405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.650682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.650745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.651002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.651035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.651316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.651368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.651644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.651698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.651935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.651969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.652247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.652301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.652522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.652577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.652777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.652833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 
00:25:05.701 [2024-07-15 13:04:23.653101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.653158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.653443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.653493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.653765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.653810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.654035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.654070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.654361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.654414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.654601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.654655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.654913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.654947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.655183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.655237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.655475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.655527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.655787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.655821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 
00:25:05.701 [2024-07-15 13:04:23.656048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.656102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.656381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.656436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.656693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.656727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.656994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.657028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.657296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.701 [2024-07-15 13:04:23.657348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.701 qpair failed and we were unable to recover it. 00:25:05.701 [2024-07-15 13:04:23.657613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.657665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.657876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.657910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.658166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.658225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.658487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.658541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.658745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.658779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 
00:25:05.702 [2024-07-15 13:04:23.659030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.659064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.659306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.659362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.659567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.659621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.659829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.659863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.660134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.660188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.660428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.660481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.660749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.660783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.661051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.661085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.661343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.661396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.661676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.661730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 
00:25:05.702 [2024-07-15 13:04:23.662011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.662045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.662261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.662314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.662575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.662629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.662895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.662929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.663157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.702 [2024-07-15 13:04:23.663210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.702 qpair failed and we were unable to recover it. 00:25:05.702 [2024-07-15 13:04:23.663480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.663533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.663786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.663820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.664057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.664110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.664333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.664388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.664651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.664704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 
00:25:05.703 [2024-07-15 13:04:23.664976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.665011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.665230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.665283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.665524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.665578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.665806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.665839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.666068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.666120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.666392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.666445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.666669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.666702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.666920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.666954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.667221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.667272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.667465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.667520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 
00:25:05.703 [2024-07-15 13:04:23.667748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.667782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.667982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.668016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.668298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.668349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.668621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.668675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.668939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.668973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.669235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.669286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.669552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.669607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.669845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.669885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.670170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.670224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.670461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.670514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 
00:25:05.703 [2024-07-15 13:04:23.670779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.670813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.703 [2024-07-15 13:04:23.671074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.703 [2024-07-15 13:04:23.671126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.703 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.671362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.671415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.671670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.671703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.671931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.671965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.672201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.672255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.672523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.672574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.672805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.672839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.673084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.673140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.673412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.673466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 
00:25:05.704 [2024-07-15 13:04:23.673719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.673761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.674023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.674056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.674321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.674373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.674596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.674647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.674912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.674945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.675177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.675231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.675504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.675559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.675776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.675810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.676041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.676094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.676324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.676377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 
00:25:05.704 [2024-07-15 13:04:23.676635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.676687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.676950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.676984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.677251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.677303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.677567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.677622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.677884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.677919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.678187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.678241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.678507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.678562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.678822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.678856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.704 [2024-07-15 13:04:23.679126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.704 [2024-07-15 13:04:23.679180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.704 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.679401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.679454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 
00:25:05.705 [2024-07-15 13:04:23.679674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.679707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.679933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.679967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.680231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.680284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.680518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.680571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.680756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.680809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.681041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.681096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.681372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.681426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.681692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.681730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.682009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.682043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.682274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.682329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 
00:25:05.705 [2024-07-15 13:04:23.682590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.682646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.682875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.682909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.683135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.683187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.683456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.683508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.683770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.683804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.684070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.684121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.684396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.684449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.684671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.684705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.684926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.684960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.685225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.685278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 
00:25:05.705 [2024-07-15 13:04:23.685554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.685608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.685848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.685882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.686158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.686210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.686427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.686480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.686679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.686713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.686969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.705 [2024-07-15 13:04:23.687003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.705 qpair failed and we were unable to recover it. 00:25:05.705 [2024-07-15 13:04:23.687276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.687328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.687602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.687655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.687917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.687951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.688162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.688214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 
00:25:05.706 [2024-07-15 13:04:23.688488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.688543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.688762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.688796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.689060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.689113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.689374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.689428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.689660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.689693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.689964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.689998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.690231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.690284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.690555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.690609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.690876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.690910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 00:25:05.706 [2024-07-15 13:04:23.691141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.706 [2024-07-15 13:04:23.691193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.706 qpair failed and we were unable to recover it. 
00:25:05.706 [2024-07-15 13:04:23.691456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.706 [2024-07-15 13:04:23.691508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.706 qpair failed and we were unable to recover it.
00:25:05.707 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 13:04:23.691778 through 13:04:23.702909 ...]
00:25:05.707 [... connect() failed, errno = 111 / qpair recovery errors continue from 13:04:23.703179 through 13:04:23.705195 ...]
00:25:05.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3498551 Killed "${NVMF_APP[@]}" "$@"
00:25:05.708 [2024-07-15 13:04:23.705483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.708 [2024-07-15 13:04:23.705539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.708 qpair failed and we were unable to recover it.
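The errno = 111 seen in every retry above is ECONNREFUSED on Linux: the NVMF_APP target process (pid 3498551) has just been killed and, as the trace below shows, a new nvmf_tgt is only being relaunched, so nothing is accepting TCP connections on 10.0.0.2:4420 and each connect() from the initiator is refused before the qpair can recover. A minimal standalone probe (hypothetical, not part of the test scripts, assumed to run from the same network namespace as the initiator) that reproduces the same refusal:

    # Hypothetical one-off check: bash's /dev/tcp fails the same way the
    # initiator's connect() does while no listener is on 10.0.0.2:4420
    # (errno 111 == ECONNREFUSED on Linux).
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' \
      || echo "connect to 10.0.0.2:4420 refused/failed (exit=$?)"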
00:25:05.708 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:05.708 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:05.708 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:05.708 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:05.708 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.708 [... interleaved connect() failed, errno = 111 / qpair recovery errors continue from 13:04:23.705803 through 13:04:23.707815 ...]
00:25:05.708 [... connect() failed, errno = 111 / sock connection error / qpair recovery errors repeat from 13:04:23.708003 through 13:04:23.709938 ...]
00:25:05.708 [... connect() failed, errno = 111 / qpair recovery errors repeat from 13:04:23.710203 through 13:04:23.712015 ...]
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3499102
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:05.709 [2024-07-15 13:04:23.712184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.709 [2024-07-15 13:04:23.712248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.709 qpair failed and we were unable to recover it.
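For reference on the relaunch command above: -m 0xF0 is the hexadecimal core mask that SPDK applications accept, so the restarted nvmf_tgt is pinned to cores 4-7. A quick way to expand such a mask (hypothetical helper, not part of the test scripts):

    # Hypothetical helper: list the core ids selected by a hex core mask.
    # 0xF0 = 0b11110000, i.e. cores 4, 5, 6 and 7.
    mask=0xF0; for c in {0..31}; do (( (mask >> c) & 1 )) && echo "core $c"; done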
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3499102
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3499102 ']'
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:05.709 13:04:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:05.709 [... interleaved connect() failed, errno = 111 / qpair recovery errors continue from 13:04:23.712456 through 13:04:23.713639 ...]
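waitforlisten in the trace above polls until the freshly started target (pid 3499102) is alive and has created its RPC socket at /var/tmp/spdk.sock, giving up after max_retries attempts. A minimal sketch of that behaviour, assuming a simple poll loop (an illustration only, not the real helper from autotest_common.sh):

    # Illustrative sketch: wait for a pid to expose its RPC UNIX domain socket.
    wait_for_rpc_sock() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
      local i
      for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died before listening
        [[ -S $rpc_addr ]] && return 0           # RPC socket exists, target is up
        sleep 0.5
      done
      return 1                                   # gave up after max_retries polls
    }
    # usage (hypothetical): wait_for_rpc_sock 3499102 /var/tmp/spdk.sock 100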
00:25:05.709 [2024-07-15 13:04:23.713800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:05.709 [2024-07-15 13:04:23.713834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:05.709 qpair failed and we were unable to recover it.
00:25:05.712 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every retry from 13:04:23.713969 through 13:04:23.737276 ...]
00:25:05.712 [2024-07-15 13:04:23.737419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.737452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.737625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.737659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.737783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.737817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.737978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.738031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.738175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.738209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.738377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.738411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.738557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.738590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.738711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.738759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.738907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.738942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.739073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.739137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 
00:25:05.712 [2024-07-15 13:04:23.739306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.739340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.739518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.739553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.739693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.739728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.739881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.739915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.740095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.740129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.740244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.740277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.740422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.740455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.740595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.740628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.740801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.740835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.740980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.741015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 
00:25:05.712 [2024-07-15 13:04:23.741161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.741195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.712 qpair failed and we were unable to recover it. 00:25:05.712 [2024-07-15 13:04:23.741312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.712 [2024-07-15 13:04:23.741346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.741492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.741526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.741655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.741688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.741813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.741848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.741989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.742029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.742168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.742203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.742324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.742359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.742526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.742561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.742682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.742717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 
00:25:05.713 [2024-07-15 13:04:23.742896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.742930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.743085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.743120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.743256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.743291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.743460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.743494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.743599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.743633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.743758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.743794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.743942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.743977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.744151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.744185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.744324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.744358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.744508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.744543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 
00:25:05.713 [2024-07-15 13:04:23.744700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.744735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.744866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.744900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.745045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.745079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.745202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.745236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.745386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.745421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.745568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.745602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.745720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.745762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.745946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.745981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.746128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.746163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.746307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.746342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 
00:25:05.713 [2024-07-15 13:04:23.746482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.746517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.746651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.746686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.746844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.746879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.747033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.747069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.747206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.747240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.747383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.747418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.747539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.747574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.747722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.747765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.747910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.747944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.748090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.748125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 
00:25:05.713 [2024-07-15 13:04:23.748235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.748270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.713 [2024-07-15 13:04:23.748412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.713 [2024-07-15 13:04:23.748446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.713 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.748595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.748630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.748757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.748792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.748939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.748974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.749112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.749151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.749261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.749296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.749405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.749439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.749607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.749642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.749788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.749823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 
00:25:05.714 [2024-07-15 13:04:23.749992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.750026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.750160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.750194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.750364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.750398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.750543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.750578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.750686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.750721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.750851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.750886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.751035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.751069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.751254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.751289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.751431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.751466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.751588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.751623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 
00:25:05.714 [2024-07-15 13:04:23.751842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.751877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.752034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.752069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.752220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.752254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.752469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.752503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.752646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.752680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.752864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.752899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.753071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.753105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.753276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.753310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.753460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.753494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.753612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.753646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 
00:25:05.714 [2024-07-15 13:04:23.753854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.753911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.754057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.754091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.754220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.754255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.754502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.754537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.754680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.754714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.754866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.754901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.755040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.755073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.755299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.755333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.755445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.755480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.755660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.755694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 
00:25:05.714 [2024-07-15 13:04:23.755817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.755851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.756064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.714 [2024-07-15 13:04:23.756099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.714 qpair failed and we were unable to recover it. 00:25:05.714 [2024-07-15 13:04:23.756210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.756244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.756394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.756427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.756636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.756671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.756806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.756844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.756985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.757019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.757156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.757190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.757334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.757368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.757566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.757601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 
00:25:05.715 [2024-07-15 13:04:23.757749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.757792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.757935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.757969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.758144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.758199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.758349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.758384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.758522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.758556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.758699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.758733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.758861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.758896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.759067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.759101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.759222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.759256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.759453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.759488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 
00:25:05.715 [2024-07-15 13:04:23.759641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.759676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.759845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.759881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.760026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.760060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.760240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.760274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.760418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.760453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.760627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.760672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.760826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.760861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.761003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.761036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.761207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.761241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.761413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.761448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 
00:25:05.715 [2024-07-15 13:04:23.761594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.761627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.761750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.761785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.761903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.761938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.762093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.762126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.762314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.762349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.762397] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:25:05.715 [2024-07-15 13:04:23.762473] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.715 [2024-07-15 13:04:23.762491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.762524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.762702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.762733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.762859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.762892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.763038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.763071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 
00:25:05.715 [2024-07-15 13:04:23.763243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.763277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.763422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.763457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.715 qpair failed and we were unable to recover it. 00:25:05.715 [2024-07-15 13:04:23.763580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.715 [2024-07-15 13:04:23.763614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.763791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.763825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.763990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.764025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.764168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.764206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.764329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.764363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.764559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.764595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.764772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.764808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.764979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.765025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 
00:25:05.716 [2024-07-15 13:04:23.765136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.765172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.765313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.765348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.765551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.765586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.765756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.765792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.765937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.765971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.766180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.766215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.766359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.766393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.766534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.766568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.766693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.766729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.766968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.767003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 
00:25:05.716 [2024-07-15 13:04:23.767172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.767207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.767327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.767361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.767538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.767573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.767687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.767722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.767971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.768007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.768162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.768195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.768366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.768433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.768608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.768643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.768764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.768800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.768942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.768976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 
00:25:05.716 [2024-07-15 13:04:23.769142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.769176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.769307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.769341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.769458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.769491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.769660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.769694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.769810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.769845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.769982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.770016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.770140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.770201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.770346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.770381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.770523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.770558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.770726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.770768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 
00:25:05.716 [2024-07-15 13:04:23.770890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.770924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.771059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.771094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.716 qpair failed and we were unable to recover it. 00:25:05.716 [2024-07-15 13:04:23.771268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.716 [2024-07-15 13:04:23.771302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.771424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.771457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.771571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.771605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.771778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.771817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.771962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.771996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.772142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.772177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.772345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.772380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.772498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.772533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 
00:25:05.717 [2024-07-15 13:04:23.772702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.772746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.772891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.772925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.773042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.773076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.773200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.773233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.773378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.773411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.773553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.773587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.773761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.773796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.773966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.773999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.774156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.774210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.774358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.774392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 
00:25:05.717 [2024-07-15 13:04:23.774511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.774545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.774720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.774762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.774920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.774974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.775160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.775216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.775359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.775393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.775507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.775541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.775689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.775722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.775914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.775950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.776092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.776127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.776272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.776306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 
00:25:05.717 [2024-07-15 13:04:23.776473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.776508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.776619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.776653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.776786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.776853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.776974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.777007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.777185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.777219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.777395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.777430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.777570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.777604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.777754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.717 [2024-07-15 13:04:23.777788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.717 qpair failed and we were unable to recover it. 00:25:05.717 [2024-07-15 13:04:23.777938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.777971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.778113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.778148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 
00:25:05.718 [2024-07-15 13:04:23.778256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.778290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.778410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.778444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.778610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.778645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.778772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.778807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.778913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.778946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.779092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.779132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.779272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.779306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.779497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.779532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.779677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.779711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.779868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.779902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 
00:25:05.718 [2024-07-15 13:04:23.780070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.780105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.780215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.780250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.780422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.780455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.780607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.780642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.780783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.780819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.780941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.780976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.781121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.781154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.781294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.781327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.781444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.781478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.781610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.781643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 
00:25:05.718 [2024-07-15 13:04:23.781791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.781826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.781963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.781998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.782139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.782172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.782341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.782374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.782493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.782528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.782671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.782704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.782824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.782858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.783008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.783043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.783187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.783220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.783347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.783381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 
00:25:05.718 [2024-07-15 13:04:23.783526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.783560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.783714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.783764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.783917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.783950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.784102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.784136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.784282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.784317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.784456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.784490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.784639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.784673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.784839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.784875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.718 qpair failed and we were unable to recover it. 00:25:05.718 [2024-07-15 13:04:23.785018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.718 [2024-07-15 13:04:23.785053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.785205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.785238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 
00:25:05.719 [2024-07-15 13:04:23.785413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.785447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.785591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.785625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.785769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.785804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.785945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.785979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.786137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.786171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.786342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.786381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.786505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.786539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.786681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.786716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.786896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.786931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.787077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.787130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 
00:25:05.719 [2024-07-15 13:04:23.787302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.787336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.787507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.787541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.787656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.787691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.787825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.787894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.788065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.788099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.788239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.788272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.788440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.788474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.788618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.788652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.788802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.788864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.789009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.789043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 
00:25:05.719 [2024-07-15 13:04:23.789196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.789231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.789370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.789405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.789545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.789578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.789691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.789725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.789877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.789913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.790082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.790116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.790261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.790294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.790436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.790470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.790615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.790649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.790800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.790834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 
00:25:05.719 [2024-07-15 13:04:23.790951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.790985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.791105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.791138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.791293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.791326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.791496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.791674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.791709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.791832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.791865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.792008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.792041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.792183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.792217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.719 [2024-07-15 13:04:23.792359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.719 [2024-07-15 13:04:23.792394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.719 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.792540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.792573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 
00:25:05.720 [2024-07-15 13:04:23.792699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.792734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.792892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.792926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.793094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.793128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.793273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.793308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.793446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.793480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.793649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.793687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.793839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.793874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.793999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.794034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.794201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.794235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.794383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.794417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 
00:25:05.720 [2024-07-15 13:04:23.794562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.794597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.794715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.794757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.794884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.794918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.795088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.795122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.795267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.795302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.795467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.795500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.795640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.795673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.795813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.795849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.795995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.796029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.796200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.796235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 
00:25:05.720 [2024-07-15 13:04:23.796405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.796439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.796578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.796612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.796756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.796791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.796941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.796998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.797156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.797214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.797358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.797392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.797508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.797541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.797684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.797719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.797875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.797910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.798078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.798111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 
00:25:05.720 [2024-07-15 13:04:23.798262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.798297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.798411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.798445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.798591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.798625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.798777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.798813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.798931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.798965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.799116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.799150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.799295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.799330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.799451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.799486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.799632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.720 [2024-07-15 13:04:23.799666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.720 qpair failed and we were unable to recover it. 00:25:05.720 [2024-07-15 13:04:23.799812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.799846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 
00:25:05.721 [2024-07-15 13:04:23.799970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.800005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.800143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.800177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.800322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.800357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.800508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.721 [2024-07-15 13:04:23.800542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.800658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.800691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.800842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.800883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.801030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.801063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.801194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.801228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.801344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.801380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.801546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.801580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 
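The "EAL: No free 2048 kB hugepages reported on node 1" message interleaved above comes from DPDK's Environment Abstraction Layer, which SPDK uses for memory setup: it is noting that NUMA node 1 currently has no free 2 MB hugepages (node 0 may still hold the pages reserved by the test setup scripts). As a minimal sketch of how that per-node counter can be inspected on a Linux host with the usual sysfs layout (this helper is illustrative only and is not part of SPDK or the autotest scripts):

/* hugepage_check.c - illustrative sketch: print free 2048 kB hugepages per NUMA node.
 * Assumes Linux per-node hugepage counters in sysfs; not part of SPDK. */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node < 8; node++) {
        char path[256];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/hugepages-2048kB/free_hugepages",
                 node);

        FILE *f = fopen(path, "r");
        if (f == NULL) {
            break;                        /* no such node: stop scanning */
        }

        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1) {
            printf("node %d: %ld free 2048 kB hugepages\n", node, free_pages);
            if (free_pages == 0) {
                printf("node %d matches the EAL warning in the log\n", node);
            }
        }
        fclose(f);
    }
    return 0;
}

A zero count on a single node is logged as a warning; the connect() failures surrounding it come from the TCP socket layer and are a separate condition.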
00:25:05.721 [2024-07-15 13:04:23.801720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.801759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.801885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.801919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.802059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.802092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.802216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.802249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.802417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.802452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.802594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.802627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.802750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.802784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.802927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.802962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.803128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.803162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.803312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.803346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 
00:25:05.721 [2024-07-15 13:04:23.803482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.803516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.803659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.803693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.803879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.803906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.804005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.804031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.804137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.804163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.804328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.804376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.804566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.804593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.804753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.804781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.804880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.804906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.805065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.805091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 
00:25:05.721 [2024-07-15 13:04:23.805192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.805218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.805328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.805353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.805499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.805526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.805653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.805679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.721 [2024-07-15 13:04:23.805787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.721 [2024-07-15 13:04:23.805813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.721 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.805949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.805976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.806103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.806129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.806299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.806326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.806428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.806453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.806609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.806635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 
00:25:05.722 [2024-07-15 13:04:23.806779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.806807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.806938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.806964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.807955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.807982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.808140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.808166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 
00:25:05.722 [2024-07-15 13:04:23.808266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.808292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.808416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.808442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.808566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.808592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.808724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.808756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.808885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.808911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.809065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.809190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.809317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.809447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.809573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 
00:25:05.722 [2024-07-15 13:04:23.809752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.809935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.809960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.810118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.810143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.810296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.810323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.810447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.810473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.810570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.810596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.810750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.810775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.810931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.810957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.811112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.811139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.811361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.811398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 
00:25:05.722 [2024-07-15 13:04:23.811578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.811603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.811764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.811791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.811997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.812023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.812238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.812264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.722 [2024-07-15 13:04:23.812360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.722 [2024-07-15 13:04:23.812385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.722 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.812495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.812520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.812689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.812715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.812862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.812890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.813012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.813205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 
00:25:05.723 [2024-07-15 13:04:23.813355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.813511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.813664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.813802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.813960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.813987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.814215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.814246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.814374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.814400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.814523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.814550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.814666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.814703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.814832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.814869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 
00:25:05.723 [2024-07-15 13:04:23.814990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.815026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.815151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.815177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.815294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.815320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.815480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.815507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.815628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.815654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.815776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.815803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.815982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.816009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.816170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.816196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.816355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.816381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.816516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.816543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 
00:25:05.723 [2024-07-15 13:04:23.816707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.816733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.816873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.816899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.816995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.817021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.817160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.817186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.817295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.817321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.817537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.817563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.817716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.817759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.817862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.817888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.818035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.818062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.818250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.818284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 
00:25:05.723 [2024-07-15 13:04:23.818420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.818447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.818636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.818662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.818789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.818817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.818951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.818978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.819132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.723 [2024-07-15 13:04:23.819158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.723 qpair failed and we were unable to recover it. 00:25:05.723 [2024-07-15 13:04:23.819294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.819320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.819472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.819499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.819611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.819647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.819809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.819835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.819941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.819968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 
00:25:05.724 [2024-07-15 13:04:23.820070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.820104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.820259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.820286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.820409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.820436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.820544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.820571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.820725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.820759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.820883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.820913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.821118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.821145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.821283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.821310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.821537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.821563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.821681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.821708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 
00:25:05.724 [2024-07-15 13:04:23.821839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.821866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.821985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.822011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.822137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.822163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.822287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.822314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.822440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.822467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.822632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.822659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.822783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.822810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.823026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.823052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.823210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.823236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.823430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.823456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 
00:25:05.724 [2024-07-15 13:04:23.823555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.823582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.823721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.823754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.823882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.823909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.824037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.824078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.824215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.824239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.824400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.824426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.824600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.824625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.824825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.824851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.824985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.825012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.825137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.825162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 
00:25:05.724 [2024-07-15 13:04:23.825339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.825378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.825503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.825528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.825744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.825771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.825908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.825935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.826098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.826137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.724 qpair failed and we were unable to recover it. 00:25:05.724 [2024-07-15 13:04:23.826276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.724 [2024-07-15 13:04:23.826300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.826465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.826505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.826675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.826709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.826910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.826937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.827081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.827121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 
00:25:05.725 [2024-07-15 13:04:23.827266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.827291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.827434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.827459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.827670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.827695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.827859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.827887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.827993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.828035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.828161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.828205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.828409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.828433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.828549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.828573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.828775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.828802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.828930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.828957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 
00:25:05.725 [2024-07-15 13:04:23.829062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.829087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.829300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.829336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.829494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.829518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.829716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.829761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.829909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.829934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.830114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.830138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.830313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.830336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.830506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.830531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.830643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.830668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.830818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.830844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 
00:25:05.725 [2024-07-15 13:04:23.830983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.831008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.831142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.831180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.831356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.831379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.831518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.831557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.831753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.831792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.831950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.831975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.832098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.832123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.832297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.832322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.832536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.832561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.725 [2024-07-15 13:04:23.832709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.832753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 
00:25:05.725 [2024-07-15 13:04:23.832902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.725 [2024-07-15 13:04:23.832928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.725 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.833153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.833178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.833300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.833324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.833480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.833505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.833649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.833688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.833832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.833858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.833984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.834008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.834203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.834244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.834395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.834420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.834620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.834645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 
00:25:05.726 [2024-07-15 13:04:23.834769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.834795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.834925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.834950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.835065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.835089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.835247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.835274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.835467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.835506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.835668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.835696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.835883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.835908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.836036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.836074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.836404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.836443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 
00:25:05.726 [2024-07-15 13:04:23.836615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:05.726 [2024-07-15 13:04:23.836658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.836681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.836810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.836837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.836941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.836966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.837143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.837168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.837319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.837344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.837517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.837542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.837723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.837766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.837884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.837909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.838032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.838057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 
00:25:05.726 [2024-07-15 13:04:23.838188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.838231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.838402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.838428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.838558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.838584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.838747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.838772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.838909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.838934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.839077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.839101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.839298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.839322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.839500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.839525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.839673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.839697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.839881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.839906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 
00:25:05.726 [2024-07-15 13:04:23.840071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.840095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.840246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.840271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.726 [2024-07-15 13:04:23.840451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.726 [2024-07-15 13:04:23.840476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.726 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.840587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.840626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.840804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.840832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.840933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.840960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.841099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.841123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.841330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.841355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.841493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.841532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.841775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.841799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 
00:25:05.727 [2024-07-15 13:04:23.841935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.841960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.842094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.842118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.842281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.842320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.842492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.842532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.842682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.842705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.842833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.842858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.843057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.843081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.843236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.843260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.843437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.843460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.843637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.843660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 
00:25:05.727 [2024-07-15 13:04:23.843795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.843836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.844036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.844070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.844209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.844233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.844469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.844494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.844667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.844692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.844809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.844836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.845012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.845052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.845231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.845255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.845451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.845475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.845594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.845633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 
00:25:05.727 [2024-07-15 13:04:23.845790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.845818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.845985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.846010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.846203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.846228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.846356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.846395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.846590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.846614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.846763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.846788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.846961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.846999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.847161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.847186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.847369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.847393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.847507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.847546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 
00:25:05.727 [2024-07-15 13:04:23.847686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.847712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.847945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.847982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.848141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.727 [2024-07-15 13:04:23.848176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.727 qpair failed and we were unable to recover it. 00:25:05.727 [2024-07-15 13:04:23.848353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.848389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.848553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.848588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.848745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.848771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.848958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.848995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.849137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.849176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.849334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.849358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.849494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.849533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 
00:25:05.728 [2024-07-15 13:04:23.849685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.849724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.849919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.849945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.850084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.850108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.850274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.850299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.850416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.850455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.850564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.850599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.850698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.850723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.850871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.850896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.851059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.851098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.851225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.851264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 
00:25:05.728 [2024-07-15 13:04:23.851425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.851450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.851598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.851636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.851790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.851816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.851989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.852029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.852185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.852209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.852427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.852450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.852619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.852643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.852786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.852814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.852921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.852948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.853157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.853182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 
00:25:05.728 [2024-07-15 13:04:23.853328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.853355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.853496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.853536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.853753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.853783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.853939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.853964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.854199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.854223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.854377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.854400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.854613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.854637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.854787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.854814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.854990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.855015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.855150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.855189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 
00:25:05.728 [2024-07-15 13:04:23.855301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.855325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.855568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.855592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.855774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.855800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.855915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.855940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.856159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.728 [2024-07-15 13:04:23.856184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.728 qpair failed and we were unable to recover it. 00:25:05.728 [2024-07-15 13:04:23.856295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.856319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.856480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.856504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.856631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.856669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.856789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.856815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.857043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.857069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 
00:25:05.729 [2024-07-15 13:04:23.857229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.857253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.857372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.857397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.857608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.857632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.857767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.857793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.857981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.858006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.858166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.858191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.858355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.858380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.858607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.858631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.858763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.858790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.858933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.858958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 
00:25:05.729 [2024-07-15 13:04:23.859109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.859147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.859285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.859310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.859534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.859558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.859701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.859745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.859936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.859960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.860111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.860136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.860368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.860393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.860577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.860611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.860720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.860750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.860904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.860930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 
00:25:05.729 [2024-07-15 13:04:23.861117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.861146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.861323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.861348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.861464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.861489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.861675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.861724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.861931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.861957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.862105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.862130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.862272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.862311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.862453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.862491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.862686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.862711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.862895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.862921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 
00:25:05.729 [2024-07-15 13:04:23.863024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.863062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.863275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.863299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.863438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.863463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.729 [2024-07-15 13:04:23.863609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.729 [2024-07-15 13:04:23.863649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.729 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.863803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.863827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.864046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.864071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.864216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.864240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.864422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.864447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.864592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.864616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.864759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.864784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 
00:25:05.730 [2024-07-15 13:04:23.864972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.864999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.865174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.865199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.865352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.865376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.865512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.865550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.865798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.865824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.865948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.865973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.866169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.866193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.866361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.866385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.866496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.866521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.866659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.866684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 
00:25:05.730 [2024-07-15 13:04:23.866934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.866970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.867110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.867134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.867321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.867345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.867486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.867509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.867667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.867706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.867905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.867930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.868082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.868108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.868293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.868317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.868453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.868478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.868601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.868625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 
00:25:05.730 [2024-07-15 13:04:23.868768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.868812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.868954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.868979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.869190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.869213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.869346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.869371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.869565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.869589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.869754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.869794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.869976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.870000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.870154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.870179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.870282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.870305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.870452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.870477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 
00:25:05.730 [2024-07-15 13:04:23.870673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.870699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.870857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.870883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.871012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.871036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.871184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.871223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.871361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.871387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.871609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.871633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.871772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.871798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.730 qpair failed and we were unable to recover it. 00:25:05.730 [2024-07-15 13:04:23.871909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.730 [2024-07-15 13:04:23.871934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.872130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.872172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.872289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.872313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 
00:25:05.731 [2024-07-15 13:04:23.872523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.872547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.872763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.872788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.872929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.872955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.873139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.873164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.873303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.873328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.873469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.873495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.873684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.873709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.873896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.873922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.874081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.874121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.874335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.874360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 
00:25:05.731 [2024-07-15 13:04:23.874508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.874546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.874671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.874696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.874880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.874921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.875080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.875104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.875317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.875342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.875452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.875478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.875610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.731 [2024-07-15 13:04:23.875635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:05.731 qpair failed and we were unable to recover it. 00:25:05.731 [2024-07-15 13:04:23.875783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.875823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.876001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.876172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 
00:25:06.027 [2024-07-15 13:04:23.876349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.876515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.876656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.876796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.876931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.876958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.877057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.877108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.877296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.877322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.877427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.877453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.877598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.877624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.877769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.877796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 
00:25:06.027 [2024-07-15 13:04:23.877943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.877969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.878110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.878136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.878279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.878319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.878446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.878487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.878601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.878627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.878768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.878795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.878898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.878925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.879054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.879079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.879260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.879285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.879401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.879426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 
00:25:06.027 [2024-07-15 13:04:23.879587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.879613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.879748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.879789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.879915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.879943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.880077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.880118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.880298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.880324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.880546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.880589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.880778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.880808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.880907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.880939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.881109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.881137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.881332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.881358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 
00:25:06.027 [2024-07-15 13:04:23.881555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.881580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.881684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.881735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.881992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.882019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.882147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.882173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.027 [2024-07-15 13:04:23.882465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.027 [2024-07-15 13:04:23.882489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.027 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.882657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.882682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.882856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.882883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.883008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.883048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.883181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.883207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.883408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.883433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 
00:25:06.028 [2024-07-15 13:04:23.883643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.883677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.884004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.884044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.884159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.884184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.884365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.884391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.884529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.884570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.884790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.884817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.884949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.884975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.885135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.885160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.885295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.885336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.885588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.885613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 
00:25:06.028 [2024-07-15 13:04:23.885803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.885829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.885978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.886027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.886154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.886193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.886425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.886450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.886656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.886681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.886863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.886888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.887098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.887134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.887277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.887302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.887413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.887439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 00:25:06.028 [2024-07-15 13:04:23.887580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.028 [2024-07-15 13:04:23.887605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.028 qpair failed and we were unable to recover it. 
00:25:06.031 [2024-07-15 13:04:23.923378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.923402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.923546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.923571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.923693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.923734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.923955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.923982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.924156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.924181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.924356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.924381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.924497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.924523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.924688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.924728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.924869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.924896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.925164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.925189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 
00:25:06.031 [2024-07-15 13:04:23.925394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.925419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.925569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.925593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.925763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.925806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.925946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.925972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.926134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.926159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.926321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.926345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.926532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.926557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.926701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.926730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.926908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.926934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.927086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.927121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 
00:25:06.031 [2024-07-15 13:04:23.927228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.927267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.927441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.927481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.927593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.927618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.927776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.927802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.927976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.928017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.928228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.928262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.928376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.928400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.928546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.928572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.928723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.928769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.928942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.928968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 
00:25:06.031 [2024-07-15 13:04:23.929112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.929151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.929336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.929371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.929516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.929540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.929730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.929786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.929964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.929988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.930169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.930193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.930301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.930340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.930462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.930487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.930615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.930640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.930785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.930812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 
00:25:06.031 [2024-07-15 13:04:23.930942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.930968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.931109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.931148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.931326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.931351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.931525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.931550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.931704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.931728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.931957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.931982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.932150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.932175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.932322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.932347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.932534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.932559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 00:25:06.031 [2024-07-15 13:04:23.932708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.932732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.031 qpair failed and we were unable to recover it. 
00:25:06.031 [2024-07-15 13:04:23.932925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.031 [2024-07-15 13:04:23.932962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.933155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.933179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.933374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.933398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.933510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.933549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.933677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.933702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.933815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.933841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.933988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.934031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.934201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.934228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.934380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.934420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.934556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.934595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 
00:25:06.032 [2024-07-15 13:04:23.934730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.934775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.934960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.934986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.935137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.935162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.935344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.935368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.935530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.935554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.935769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.935817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.935945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.935970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.936158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.936183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.936323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.936358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.936539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.936562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 
00:25:06.032 [2024-07-15 13:04:23.936722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.936751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.936901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.936927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.937067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.937106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.937247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.937286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.937403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.937443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.937566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.937590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.937775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.937801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.937924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.937950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.938104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.938129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.938274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.938313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 
00:25:06.032 [2024-07-15 13:04:23.938487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.938512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.938697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.938721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.938869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.938895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.939028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.939054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.939239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.939265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.939373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.939411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.939538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.939564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.939713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.939752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.939895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.939921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.940113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.940139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 
00:25:06.032 [2024-07-15 13:04:23.940296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.940336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.940474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.940499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.940632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.940658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.940804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.940832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.941021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.941061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.941198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.941222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.941356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.941381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.941536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.941579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.941718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.941747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.941868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.941895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 
00:25:06.032 [2024-07-15 13:04:23.942107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.942131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.942261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.942300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.942438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.942464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.942702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.942747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.942898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.942923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.943086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.943111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.943281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.943306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.943422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.943461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.943597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.943621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.943784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.943809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 
00:25:06.032 [2024-07-15 13:04:23.943963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.943987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.944164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.944189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.944336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.944360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.032 [2024-07-15 13:04:23.944483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.032 [2024-07-15 13:04:23.944507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.032 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.944634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.944659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.944793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.944819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.945056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.945081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.945254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.945277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.945416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.945442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.945600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.945638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.945782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.945808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.945960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.945987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.946215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.946249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.946428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.946453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.946647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.946685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.946834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.946859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.946976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.947003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.947131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.947156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.947295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.947321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.947464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.947489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.947685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.947723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.947919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.947944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.948115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.948154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.948318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.948343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.948506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.948545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.948703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.948727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.948865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.948891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.949026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.949056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.949214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.949252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.949396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.949420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.949619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.949644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.949820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.949846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.950050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.950100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.950254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.950277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.950416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.950455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.950628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.950652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.950792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.950817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.951004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.951055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.951245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.951270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.951404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.951429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.951570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.951594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.951774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.951800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.951941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.951968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.952188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.952220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.952403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.952427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.952603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.952627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.952772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.952814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.952980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.953006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.953156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.953180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.953370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.953404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.953529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.953554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.953729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.953785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.953912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.953936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.954136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.954160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.954328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.954353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.954463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.954487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.954653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.954678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.954829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.954854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.955037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.955062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.955231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.955255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.955369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.955393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.955531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.955556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.955687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.955711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.955854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.955894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.956126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.956151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.956290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.956313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.956416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.956440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.956699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.956749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.957019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.957059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.957170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.957204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 
00:25:06.033 [2024-07-15 13:04:23.957332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.957369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.957585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.957609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.957752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.033 [2024-07-15 13:04:23.957791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.033 qpair failed and we were unable to recover it. 00:25:06.033 [2024-07-15 13:04:23.957994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.958030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.958204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.958229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.958391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.958416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.958618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.958642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.958795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.958820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.959012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.959038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.959187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.959210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.959347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.959372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.959567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.959591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.959745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.959787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.959894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.959919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.960047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.960072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.960195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.960219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.960337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.960361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.960510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.960535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.960694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.960734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.960896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.960923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.961096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.961120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.961270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.961293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.961440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.961479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.961697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.961729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.961903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.961929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.962077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.962101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.962242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.962281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.962518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.962542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.962692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.962716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.962863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.962903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.963029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.963054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.963237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.963263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.963395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.963435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.963561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.963586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.964140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.964185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.964358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.964386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.964515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.964542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.964694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.964724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.964847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.964875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.965005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.965134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.965267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.965448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.965573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.965730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.965888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.965916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.966040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.966066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.966192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.966219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.966391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.966418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.966548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.966575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.966685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.966711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.966853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.966896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.967969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.967995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.968114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.968140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.968271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.968297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.968427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.968454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.968613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.968639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.968749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.968775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.968904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.968930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.969038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.969187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.969346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.969494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.969637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 
00:25:06.034 [2024-07-15 13:04:23.969784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.969909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.969935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.970079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.970104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.034 qpair failed and we were unable to recover it. 00:25:06.034 [2024-07-15 13:04:23.970215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.034 [2024-07-15 13:04:23.970242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.970387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.970413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.970564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.970590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.970785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.970813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.970947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.970977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.971085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.971112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.971251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.971277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 
00:25:06.035 [2024-07-15 13:04:23.971431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.971457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.971592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.971618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.971755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.971781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.971935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.971961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.972173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.972199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.972399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.972439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.972572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.972598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.972795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.972822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.972925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.972952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.973091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.973117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 
00:25:06.035 [2024-07-15 13:04:23.973247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.973273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.973433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.973459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.973624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.973650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.973805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.973832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.973931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.973957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.974150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.974187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.974287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.974313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.974497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.974523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.974634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.974660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.974865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.974891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 
00:25:06.035 [2024-07-15 13:04:23.974995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.975132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.975272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.975460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.975587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.975755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.975902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.975928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.976057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.976083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.976198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.976235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.976406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.976398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:06.035 [2024-07-15 13:04:23.976432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.976433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.035 [2024-07-15 13:04:23.976450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.035 [2024-07-15 13:04:23.976463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.035 [2024-07-15 13:04:23.976474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.035 [2024-07-15 13:04:23.976574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:06.035 [2024-07-15 13:04:23.976598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.976643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.976715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:06.035 [2024-07-15 13:04:23.976777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:06.035 [2024-07-15 13:04:23.976780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.035 [2024-07-15 13:04:23.976839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.976865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.976961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.976985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.977183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.977208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.977367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.977398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.977609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.977636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it.
00:25:06.035 [2024-07-15 13:04:23.977805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.977832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.977969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.977995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.978228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.978257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.978387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.978413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.978542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.978568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.978711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.978753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.978887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.978914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.979057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.979083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.979220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.979246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.979453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.979491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 
00:25:06.035 [2024-07-15 13:04:23.979709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.979752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.979882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.979909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.980047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.980073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.980253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.980282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.980422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.035 [2024-07-15 13:04:23.980447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.035 qpair failed and we were unable to recover it. 00:25:06.035 [2024-07-15 13:04:23.980597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.980623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.980822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.980849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.981003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.981028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.981210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.981236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.981429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.981466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.981630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.981667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.981815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.981842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.981969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.981995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.982169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.982194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.982335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.982360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.982518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.982550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.982674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.982699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.982821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.982846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.982974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.983000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.983145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.983171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.983324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.983350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.983449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.983474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.983629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.983654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.983819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.983846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.983976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.984134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.984281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.984430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.984591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.984777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.984924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.984950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.985080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.985106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.985257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.985282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.985451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.985476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.985608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.985634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.985786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.985813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.985918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.985944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.986034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.986060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.986181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.986206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.986320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.986347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.986533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.986559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.986684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.986709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.986846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.986873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.986991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.987216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.987398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.987520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.987679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.987817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.987966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.987991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.988147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.988173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.988371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.988396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.988521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.988546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.988693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.988720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.988865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.988891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.989078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.989104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.989280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.989306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.989505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.989531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.989699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.989725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.989899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.989924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.990125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.990151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.990313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.990342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.990564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.990590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.990729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.990780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.990940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.990966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.991194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.991220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.991357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.991382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.991531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.991566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.991730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.991769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.991927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.991953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 
00:25:06.036 [2024-07-15 13:04:23.992139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.992166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.036 [2024-07-15 13:04:23.992377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.036 [2024-07-15 13:04:23.992403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.036 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.992587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.992613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.992751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.992779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.992906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.992931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.993078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.993104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.993241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.993267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.993424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.993450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.993644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.993669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.993853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.993888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 
00:25:06.037 [2024-07-15 13:04:23.994079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.994105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.994282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.994307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.994505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.994532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.994665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.994690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.994825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.994851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.995019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.995046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.995182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.995208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.995331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.995357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.995477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.995513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.995691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.995716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 
00:25:06.037 [2024-07-15 13:04:23.995866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.995892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.996032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.996069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.996227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.996253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.996381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.996407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.996509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.996535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.996717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.996752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.996886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.996913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.997082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.997107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.997248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.997284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.997405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.997431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 
00:25:06.037 [2024-07-15 13:04:23.997615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.997642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.997784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.997810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.997953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.997991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.998139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.998165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.998305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.998330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.998491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.998517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.998648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.998673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.998814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.998840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.998942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.998972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.999130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.999156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 
00:25:06.037 [2024-07-15 13:04:23.999342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.999368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.999530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.999557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.999716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.999747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.999878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:23.999903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:23.999990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.000016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.000114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.000139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.000238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.000272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.000523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.000553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.000726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.000769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.000893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.000918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 
00:25:06.037 [2024-07-15 13:04:24.001103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.001129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.001265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.001291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.001404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.001429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.001582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.037 [2024-07-15 13:04:24.001609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.037 qpair failed and we were unable to recover it. 00:25:06.037 [2024-07-15 13:04:24.001750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.001787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.001901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.001925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.002027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.002053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.002158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.002182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.002325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.002352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.002523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.002548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 
00:25:06.038 [2024-07-15 13:04:24.002682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.002708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.002864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.002891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.003050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.003076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.003237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.003263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.003457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.003483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.003647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.003682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.003826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.003852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.004009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.004034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.004195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.004221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.004354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.004381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 
00:25:06.038 [2024-07-15 13:04:24.004562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.004587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.004761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.004788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.004965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.004991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.005093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.005118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.005235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.005261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.005430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.005455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.005553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.005589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.005710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.005735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.005887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.005916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.006028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.006054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 
00:25:06.038 [2024-07-15 13:04:24.006212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.006238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.006376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.006402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.006547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.006574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.006719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.006749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.006892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.006919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.007050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.007076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.007191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.007216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.007382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.007407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.007552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.007578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.007694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.007719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 
00:25:06.038 [2024-07-15 13:04:24.007835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.007859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.008009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.008035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.008098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1babae0 (9): Bad file descriptor 00:25:06.038 [2024-07-15 13:04:24.008343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.008385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.008515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.008544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.008697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.008724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.008836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.008863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.008976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.009003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.009148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.009174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.009334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.009360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 
00:25:06.038 [2024-07-15 13:04:24.009501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.009527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.009675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.009702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.009849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.009876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.010017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.010043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.010183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.010209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.010392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.010418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.010594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.010620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.010773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.010800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.010926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.010951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.011089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.011126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 
00:25:06.038 [2024-07-15 13:04:24.011280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.011306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.011418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.011443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.011594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.011620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.038 qpair failed and we were unable to recover it. 00:25:06.038 [2024-07-15 13:04:24.011761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.038 [2024-07-15 13:04:24.011787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.011934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.011971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.012187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.012225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.012359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.012385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.012533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.012558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.012672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.012710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.012833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.012859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 
00:25:06.039 [2024-07-15 13:04:24.012959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.012992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.013103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.013128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.013251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.013276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.013424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.013450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.013590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.013615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.013724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.013757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.013902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.013929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.014039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.014192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.014336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 
00:25:06.039 [2024-07-15 13:04:24.014468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.014631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.014816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.014962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.014988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.015166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.015192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.015343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.015369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.015507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.015534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.015683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.015710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.015857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.015898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.016023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.016051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 
00:25:06.039 [2024-07-15 13:04:24.016180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.016206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.016336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.016363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.016522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.016549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.016708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.016735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.016856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.016883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.017033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.017060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.017199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.017227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.017379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.017406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.017572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.017599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.017771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.017797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 
00:25:06.039 [2024-07-15 13:04:24.017968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.017994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.018094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.018120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.018227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.018252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.018348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.018374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.018501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.018526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.018716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.018749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.018869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.018894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.019005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.019030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.019152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.019179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.019321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.019347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 
00:25:06.039 [2024-07-15 13:04:24.019518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.019544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.019650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.019677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.019805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.019831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.019979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.020005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.020177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.020203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.020351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.020376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.020507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.020533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.020709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.020741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.020850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.020877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.021036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.021062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 
00:25:06.039 [2024-07-15 13:04:24.021198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.021226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.021367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.021393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.021553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.021583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.039 [2024-07-15 13:04:24.021728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.039 [2024-07-15 13:04:24.021760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.039 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.021862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.021887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.021991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.022018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.022154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.022180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.022340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.022366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.022604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.022630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.022792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.022819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 
00:25:06.040 [2024-07-15 13:04:24.022934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.022960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.023124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.023150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.023331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.023358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.023467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.023493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.023654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.023681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.023832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.023860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.023967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.024003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.024118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.024144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.024343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.024370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.024509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.024535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 
00:25:06.040 [2024-07-15 13:04:24.024717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.024750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.024859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.024885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.025009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.025035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.025178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.025203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.025385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.025410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.025536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.025561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.025688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.025713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.025842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.025868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.026003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.026029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.026222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.026247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 
00:25:06.040 [2024-07-15 13:04:24.026438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.026465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.026588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.026614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.026772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.026798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.026952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.026978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.027157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.027184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.027358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.027385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.027554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.027579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.027744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.027769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.027876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.027902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.028032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.028059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 
00:25:06.040 [2024-07-15 13:04:24.028227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.028252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.028368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.028393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.028563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.028593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.028768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.028795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.028941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.028966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.029163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.029189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.029319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.029350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.029491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.029516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.029667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.029693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.029855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.029881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 
00:25:06.040 [2024-07-15 13:04:24.030002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.030027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.030178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.030203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.030362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.030389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.030576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.030602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.030751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.040 [2024-07-15 13:04:24.030777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.040 qpair failed and we were unable to recover it. 00:25:06.040 [2024-07-15 13:04:24.030888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.030913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.031051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.031077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.031237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.031263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.031376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.031402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.031551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.031577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 
00:25:06.041 [2024-07-15 13:04:24.031717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.031749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.031920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.031965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.032108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.032136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.032280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.032308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.032474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.032501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.032651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.032678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.032815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.032842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.032983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.033123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.033312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 
00:25:06.041 [2024-07-15 13:04:24.033474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.033631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.033798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.033937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.033964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.034148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.034174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.034319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.034345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.034498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.034524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.034674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.034701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.034824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.034851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.034960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.034987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 
00:25:06.041 [2024-07-15 13:04:24.035128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.035155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.035316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.035348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.035513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.035544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.035703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.035730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.035875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.035902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.036049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.036220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.036390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.036524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.036660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 
00:25:06.041 [2024-07-15 13:04:24.036815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.036953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.036979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.037164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.037191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.037355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.037381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.037534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.037561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.037664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.037691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.037797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.037824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.037950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.037976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.038122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.038149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.038286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.038313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 
00:25:06.041 [2024-07-15 13:04:24.038462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.038488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.038652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.038678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.038788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.038815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.038936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.038963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.039118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.039144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.039282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.039309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.039448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.039474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.041 qpair failed and we were unable to recover it. 00:25:06.041 [2024-07-15 13:04:24.039587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.041 [2024-07-15 13:04:24.039613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.039752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.039779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dc8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.039926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.039968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 
00:25:06.042 [2024-07-15 13:04:24.040099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.040126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.040279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.040304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.040467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.040494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.040636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.040662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.040834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.040862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.040978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.041004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.041136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.041162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.041357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.041383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.041529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.041560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.041701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.041727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 
00:25:06.042 [2024-07-15 13:04:24.041857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.041883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.042042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.042068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.042191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.042222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.042365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.042391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.042535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.042560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.042755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.042781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.042900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.042925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.043074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.043099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.043239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.043266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.043409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.043434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 
00:25:06.042 [2024-07-15 13:04:24.043599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.043624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.043770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.043797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.043953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.043978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.044097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.044122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.044243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.044269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.044412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.044449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.044599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.044635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.044801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.044828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.044934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.044960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.045078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.045104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 
00:25:06.042 [2024-07-15 13:04:24.045237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.045262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.045406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.045431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.045570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.045596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.045707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.045732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.045859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.045887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.046001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.046027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.046142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.046168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.046306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.046332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.046500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.046537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.046682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 
00:25:06.042 [2024-07-15 13:04:24.046832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.046858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.046977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.047002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.047145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.047172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.047313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.047339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.047524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.047549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.047709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.047735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.047862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.047889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.048002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.048039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.048184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.048210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 00:25:06.042 [2024-07-15 13:04:24.048323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.042 [2024-07-15 13:04:24.048349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.042 qpair failed and we were unable to recover it. 
00:25:06.042 [2024-07-15 13:04:24.048490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.048515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.048682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.048708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.048836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.048866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.048988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.049013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.049124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.049149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.049301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.049327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.049489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.049516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.049703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.049730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.049869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.049895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.050012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.050038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 
00:25:06.043 [2024-07-15 13:04:24.050150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.050175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.050319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.050354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.050518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.050544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.050695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.050722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.050870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.050896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.051031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.051058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.051235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.051261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.051429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.051467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.051594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.051620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.051752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.051779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 
00:25:06.043 [2024-07-15 13:04:24.051925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.051950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.052085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.052110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.052277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.052304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.052449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.052486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.052600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.052626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.052809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.052836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.052950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.052975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.053120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.053146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.053283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.053309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.053483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.053509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 
00:25:06.043 [2024-07-15 13:04:24.053677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.053702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.053859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.053886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.054002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.054027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.054164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.054190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.054332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.054358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.054524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.054550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.054715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.054746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.054885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.054911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.055028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.055053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.055168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.055194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 
00:25:06.043 [2024-07-15 13:04:24.055342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.055368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.055545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.055570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.055682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.055712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.055835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.055862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.055982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.056007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.056120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.056146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.056286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.056311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.056523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.056549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.056715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.056749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.056852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.056877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 
00:25:06.043 [2024-07-15 13:04:24.056995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.043 [2024-07-15 13:04:24.057021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.043 qpair failed and we were unable to recover it. 00:25:06.043 [2024-07-15 13:04:24.057167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.057194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.057482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.057508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.057662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.057688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.057799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.057826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.057945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.057971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.058137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.058163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.058329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.058354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.058498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.058524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.058678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.058704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 
00:25:06.044 [2024-07-15 13:04:24.058831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.058857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.058970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.058997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.059113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.059138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.059308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.059334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.059475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.059501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.059653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.059678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.059833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.059859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.059992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.060018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.060152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.060178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.060326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.060353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 
00:25:06.044 [2024-07-15 13:04:24.060530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.060555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.060700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.060726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.060883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.060909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.061013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.061039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.061184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.061209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.061362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.061389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.061536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.061562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.061680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.061705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.061839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.061865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.062009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.062035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 
00:25:06.044 [2024-07-15 13:04:24.062178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.062203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.062346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.062372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.062540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.062571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.062711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.062736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.062860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.062895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.063008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.063214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.063345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.063514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.063671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 
00:25:06.044 [2024-07-15 13:04:24.063834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.063961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.063985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.064135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.064161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.064300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.064326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.064460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.064485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.064629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.064661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.064839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.064866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.065017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.065043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.065149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.065175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.065318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.065344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 
00:25:06.044 [2024-07-15 13:04:24.065514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.065539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.065696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.065733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.065875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.065901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.066007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.066042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.066215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.066241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.066371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.044 [2024-07-15 13:04:24.066397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.044 qpair failed and we were unable to recover it. 00:25:06.044 [2024-07-15 13:04:24.066542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.066568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.066693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.066730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.066866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.066893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.067003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.067032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.067182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.067208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.067419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.067445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.067614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.067640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.067802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.067828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.067967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.067994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.068160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.068186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.068301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.068326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.068470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.068496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.068641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.068667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.068828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.068855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.068966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.068991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.069144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.069170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.069340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.069365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.069499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.069524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.069663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.069689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.069844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.069868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.069981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.070006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.070183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.070209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.070330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.070355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.070509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.070535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.070736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.070792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.070924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.070950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.071107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.071146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.071346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.071372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.071516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.071541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.071690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.071716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.071874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.071899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.072030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.072056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.072258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.072282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.072462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.072499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.072681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.072706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.072853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.072880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.072995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.073020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.073166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.073206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.073415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.073440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.073599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.073625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.073801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.073828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.073931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.073956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.074099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.074125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.074317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.074346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.074474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.074498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.074649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.074675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.074854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.074882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.074981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.075006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.075135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.075161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.075330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.075369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.075552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.075577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.075792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.075819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.075921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.075947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.076050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.076174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.076294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.076459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.076638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.076812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.076957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.076983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.077108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.077134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.077282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.077309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.077517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.077542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 00:25:06.045 [2024-07-15 13:04:24.077719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.045 [2024-07-15 13:04:24.077752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.045 qpair failed and we were unable to recover it. 
00:25:06.045 [2024-07-15 13:04:24.077881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.077906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.078956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.078982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.079099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.079125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.079262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.079288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 
00:25:06.046 [2024-07-15 13:04:24.079415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.079441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.079594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.079633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.079790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.079817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.079957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.079983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.080174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.080199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.080314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.080339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.080497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.080522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.080658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.080698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.080848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.080875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.080996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.081028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 
00:25:06.046 [2024-07-15 13:04:24.081203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.081230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.081369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.081395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.081545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.081570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.081707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.081733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.081910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.081937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.082091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.082118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.082294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.082320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.082459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.082487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.082633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.082659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.082799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.082825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 
00:25:06.046 [2024-07-15 13:04:24.082939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.082965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.083106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.083133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.083276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.083303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.083454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.083480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.083627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.083652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.083788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.083815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.083941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.083967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.084074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.084100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.084301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.084328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.084473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.084500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 
00:25:06.046 [2024-07-15 13:04:24.084619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.084657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.084784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.084811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.084919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.084944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.085059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.085085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.085268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.085305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.085463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.085489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.085655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.085680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.085815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.085842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.085956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.085981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.086156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.086183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 
00:25:06.046 [2024-07-15 13:04:24.086336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.086372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.086547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.086572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.086728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.086770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.086889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.086915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.087039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.087064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.087182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.087208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.046 [2024-07-15 13:04:24.087350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.046 [2024-07-15 13:04:24.087376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.046 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.087541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.087566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.087697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.087722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.087880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.087911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.047 [2024-07-15 13:04:24.088042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.088174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.088358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.088488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.088628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.088793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.088932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.088957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.089154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.089181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.089321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.089347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.089501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.089527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.047 [2024-07-15 13:04:24.089664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.089691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.089847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.089873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.089987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.090012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.090153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.090179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.090345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.090370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.090508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.090533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.090692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.090729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.090875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.090901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.091020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.091045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.091188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.091214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.047 [2024-07-15 13:04:24.091367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.091392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.091531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.091556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.091706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.091732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.091852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.091878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.092012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.092181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.092366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.092523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.092674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.092815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.047 [2024-07-15 13:04:24.092950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.092976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.093115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.093141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.093275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.093302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.093445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.093470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.093637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.093663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.093776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.093803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.093957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.093982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.094150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.094176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.094346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.094372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.094513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.094543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.047 [2024-07-15 13:04:24.094657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.094683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.094801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.094829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.094946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.094971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.095112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.095138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.095254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.095279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.095386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.095412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.095600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.095626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.095743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.095768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.095912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.095938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.096131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.096158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.047 [2024-07-15 13:04:24.096343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.096369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.096508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.096533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.096692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.096718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.096839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.096866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.097023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.097049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.097192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.097219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.097354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.097381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.097474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.097500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.097617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.097644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 00:25:06.047 [2024-07-15 13:04:24.097794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.047 [2024-07-15 13:04:24.097820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.047 qpair failed and we were unable to recover it. 
00:25:06.048 [2024-07-15 13:04:24.107999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.048 [2024-07-15 13:04:24.108026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.048 qpair failed and we were unable to recover it. 00:25:06.048 [2024-07-15 13:04:24.108139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.048 [2024-07-15 13:04:24.108165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.048 qpair failed and we were unable to recover it. 00:25:06.048 [2024-07-15 13:04:24.108269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.048 [2024-07-15 13:04:24.108295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.048 qpair failed and we were unable to recover it. 00:25:06.048 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:06.048 [2024-07-15 13:04:24.108459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.048 [2024-07-15 13:04:24.108486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.048 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:06.048 qpair failed and we were unable to recover it. 00:25:06.048 [2024-07-15 13:04:24.108592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.048 [2024-07-15 13:04:24.108618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:06.049 [2024-07-15 13:04:24.108733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.108767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:06.049 [2024-07-15 13:04:24.108897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.108923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.049 [2024-07-15 13:04:24.109051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.109077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 
00:25:06.049 [2024-07-15 13:04:24.109217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.109243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.109413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.109438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.109580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.109606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.109722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.109757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.109871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.109896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.110006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.110033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.110176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.110201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.110308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.110334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.110505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.110533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.110705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.110731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 
00:25:06.049 [2024-07-15 13:04:24.110857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.110882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.111953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.111979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.112087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.112113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.112227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.112252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 
00:25:06.049 [2024-07-15 13:04:24.112400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.112426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.112553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.112580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.112688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.112714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.112854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.112882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.112996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.113136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.113299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.113493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.113659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.113830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 
00:25:06.049 [2024-07-15 13:04:24.113971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.113997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.114149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.114175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.114319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.114345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.114496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.114522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.114637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.114663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.114820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.114845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.114959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.114984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.115167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.115194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.115332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.115357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.115476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.115501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 
00:25:06.049 [2024-07-15 13:04:24.115673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.115699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.115823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.115849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.115963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.115990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.116168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.116195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.116315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.116341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.116465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.116491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.116616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.116642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.116813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.116838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.116956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.116982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.117126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.117151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 
00:25:06.049 [2024-07-15 13:04:24.117305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.117330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.117509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.117535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.117640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.117666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.117810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.049 [2024-07-15 13:04:24.117837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.049 qpair failed and we were unable to recover it. 00:25:06.049 [2024-07-15 13:04:24.117944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.117969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.118084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.118109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.118279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.118306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.118439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.118464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.118572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.118599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.118700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.118726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 
00:25:06.050 [2024-07-15 13:04:24.118853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.118880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.118991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.119145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.119285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.119513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.119668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.119816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.119958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.119983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.120100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.120126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.120293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.120322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 
00:25:06.050 [2024-07-15 13:04:24.120455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.120481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.120655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.120685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.120842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.120869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.120982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.121176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.121345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.121510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.121672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.121822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.121951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.121976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 
00:25:06.050 [2024-07-15 13:04:24.122113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.122138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.122288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.122313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.122487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.122514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.122661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.122687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.122798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.122823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.122936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.122962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.123141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.123166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.123301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.123327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.123471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.123509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.123630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.123655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 
00:25:06.050 [2024-07-15 13:04:24.123806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.123831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.123940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.123965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.124106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.124132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.124258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.124285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.124398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.124424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.124559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.124584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.124706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.124732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.124855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.124881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.125006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.125138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 
00:25:06.050 [2024-07-15 13:04:24.125312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.125474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.125610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.125783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.125924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.125951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.126128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.126153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.126298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.126323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.126419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.126445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.126602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.126627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.126759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.126796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 
00:25:06.050 [2024-07-15 13:04:24.126909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.126935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.127099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.127244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.127412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.127607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.127759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.127888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.127992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.128016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.050 qpair failed and we were unable to recover it. 00:25:06.050 [2024-07-15 13:04:24.128165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.050 [2024-07-15 13:04:24.128192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.128345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.128371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
00:25:06.051 [2024-07-15 13:04:24.128527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.128552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.128707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.128734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.128864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.128888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.129000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.129037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.129204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.129231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.129395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.129421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.129575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.129600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.129703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.129729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.129842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.129868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
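Editor's note: every entry in the run above is the same two-step failure: posix_sock_create() gets errno 111 (ECONNREFUSED) back from connect(), and nvme_tcp_qpair_connect_sock() then gives up on the queue pair ("qpair failed and we were unable to recover it"). ECONNREFUSED simply means nothing is accepting TCP connections on 10.0.0.2:4420 at that instant, which is consistent with the target side being deliberately taken down by the target_disconnect test. The snippet below is a hypothetical stand-alone probe, not part of the SPDK test suite, that reproduces the same condition with a plain TCP connect from bash.

# Hypothetical helper, not part of the SPDK autotest scripts: probe the same
# address/port the initiator is retrying above. A refused connect here
# corresponds to the errno 111 entries in the log.
probe_target() {
    local ip="${1:-10.0.0.2}" port="${2:-4420}"
    # bash's /dev/tcp pseudo-device issues a plain TCP connect() in a subshell
    if timeout 1 bash -c "exec 3<>/dev/tcp/${ip}/${port}" 2>/dev/null; then
        echo "connect to ${ip}:${port} succeeded - a listener is up"
    else
        echo "connect to ${ip}:${port} refused or timed out - matches errno 111 above"
    fi
}

probe_target 10.0.0.2 4420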
00:25:06.051 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.051 [2024-07-15 13:04:24.129975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.130001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.130135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:06.051 [2024-07-15 13:04:24.130162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.051 [2024-07-15 13:04:24.130319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.130345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.051 [2024-07-15 13:04:24.130481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.130508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.130637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.130664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.130780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.130810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.130911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.130938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.131057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
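Editor's note: interleaved with the retry noise, the shell trace above shows the test script doing two things: installing a cleanup trap (process_shm plus nvmftestfini on SIGINT/SIGTERM/EXIT, coming from nvmf/common.sh) and issuing its first RPC, bdev_malloc_create 64 512 -b Malloc0, which asks the target for a 64 MiB RAM-backed bdev with a 512-byte block size named Malloc0. A minimal stand-alone sketch of the same pattern follows; it assumes a built SPDK tree with a running target and that rpc_cmd in the harness is a thin wrapper around scripts/rpc.py. The cleanup body is a placeholder, not the harness's real process_shm/nvmftestfini.

# Sketch only, assuming a running SPDK target reachable via scripts/rpc.py on
# its default socket.
cleanup() {
    # placeholder for the harness's 'process_shm --id $NVMF_APP_SHM_ID' and 'nvmftestfini'
    echo "dumping shared-memory state and shutting the target down..."
}
trap cleanup SIGINT SIGTERM EXIT

# Same RPC as in the trace: a 64 MiB malloc bdev, 512-byte blocks, named Malloc0.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0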
00:25:06.051 [2024-07-15 13:04:24.131239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.131397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.131530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.131690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.131831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.131969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.131994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.132100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.132126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.132259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.132283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.132419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.132445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.132607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.132634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
00:25:06.051 [2024-07-15 13:04:24.132774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.132800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.132916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.132941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.133902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.133927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.134083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.134110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
00:25:06.051 [2024-07-15 13:04:24.134241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.134266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.134400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.134425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.134583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.134609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.134751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.134777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.134888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.134914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.135067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.135092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.135254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.135279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.135429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.135456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.135590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.135616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.135755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.135782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
00:25:06.051 [2024-07-15 13:04:24.135890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.135917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.136924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.136954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.137060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.137085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.137213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.137240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 
00:25:06.051 [2024-07-15 13:04:24.137342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.137367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.137508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.137533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.137661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.137688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.137808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.051 [2024-07-15 13:04:24.137835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.051 qpair failed and we were unable to recover it. 00:25:06.051 [2024-07-15 13:04:24.137970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.137996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.138093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.138239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.138391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.138520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.138669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 
00:25:06.052 [2024-07-15 13:04:24.138826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.138972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.138998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.139130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.139155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.139314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.139341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.139475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.139501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.139656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.139681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.139792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.139818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.139925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.139950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.140079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.140104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.140235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.140262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 
00:25:06.052 [2024-07-15 13:04:24.140389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.140414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.140571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.140597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.140724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.140756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.140862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.140896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.141071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.141109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.141241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.141268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.141405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.141431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.141564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.141590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.141702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.141728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.141851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.141876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 
00:25:06.052 [2024-07-15 13:04:24.141980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.142900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.142991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.143020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.143147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.143174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.143348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.143374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 
00:25:06.052 [2024-07-15 13:04:24.143531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.143556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.143714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.143745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.143863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.143889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.144058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.144083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.144265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.144291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.144439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.144465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.144587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.144612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.144787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.144815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.144929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.144955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.145117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.145142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 
00:25:06.052 [2024-07-15 13:04:24.145269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.145295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.145426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.145451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.145610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.145635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.145775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.145802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.145902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.145927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.146061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.146214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.146358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.146542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.146700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 
00:25:06.052 [2024-07-15 13:04:24.146838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.146954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.146979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.052 [2024-07-15 13:04:24.147133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.052 [2024-07-15 13:04:24.147158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.052 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.147307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.147333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.147469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.147494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.147649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.147675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.147787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.147824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.147932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.147965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.148100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.148132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.148302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.148328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 
00:25:06.053 [2024-07-15 13:04:24.148437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.148463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.148621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.148648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.148795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.148822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.148961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.148986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.149146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.149172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.149300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.149325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.149460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.149485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.149643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.149673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.149800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.149836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.149960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.149995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 
00:25:06.053 [2024-07-15 13:04:24.150179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.150205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.150320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.150345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.150477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.150503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.150628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.150654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.150811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.150845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.151004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.151030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.151218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.151244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.151410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.151436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.151565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.151591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.151723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.151756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 
00:25:06.053 [2024-07-15 13:04:24.151852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.151884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.152073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.152099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.152240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.152265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.152376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.152401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.152561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.152586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.152749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.152776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.152898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.152923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.153066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.153092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.153235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.153260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.153381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.153408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 
00:25:06.053 [2024-07-15 13:04:24.153543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.153568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.153703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.153728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.153877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.153903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.154073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.154129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.154286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.154315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.154456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.154483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.154646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.154672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.154827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.154854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.154957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.154984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.155148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.155176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 
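Every connect() failure above reports errno = 111, which on Linux is ECONNREFUSED: the initiator keeps retrying 10.0.0.2:4420 while nothing is yet accepting on that port, which is expected while the target_disconnect test is still wiring up the target side. A quick way to confirm the errno name from a shell, assuming Python 3 is available on the test node:

$ python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
ECONNREFUSED - Connection refused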
00:25:06.053 [2024-07-15 13:04:24.155336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 Malloc0 00:25:06.053 [2024-07-15 13:04:24.155361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.155465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.155490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.155648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.155674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.053 [2024-07-15 13:04:24.155826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.155853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.155971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.156006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.053 [2024-07-15 13:04:24.156153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.156179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.156370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.156396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.156562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.156586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 
00:25:06.053 [2024-07-15 13:04:24.156688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.156714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.156857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.156883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.157027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.157053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.157185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.157211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.157316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.157341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.157465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.157491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.157677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.157718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.053 qpair failed and we were unable to recover it. 00:25:06.053 [2024-07-15 13:04:24.157889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.053 [2024-07-15 13:04:24.157917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.158087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.158114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.158279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.158306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 
00:25:06.054 [2024-07-15 13:04:24.158473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.158500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.158633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.158660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.158838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.158866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.158980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.159006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.159038] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.054 [2024-07-15 13:04:24.159140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.159166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.159335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.159372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.159501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.159526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.159692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.159719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.159835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.159861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 
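Buried in the retry noise above, host/target_disconnect.sh calls rpc_cmd nvmf_create_transport -t tcp -o, and the target acknowledges it with the "*** TCP Transport Init ***" notice. rpc_cmd is a thin wrapper around SPDK's scripts/rpc.py, so a rough standalone sketch of the same step looks like the line below; the path and the extra -o flag simply mirror the script's invocation seen in this log and are not re-derived here:

$ scripts/rpc.py nvmf_create_transport -t tcp -o    # register the TCP transport with the NVMe-oF target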
00:25:06.054 [2024-07-15 13:04:24.159985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.160011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.160146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.160173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.160285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.160322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.160470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.160495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.160633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.160668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.160824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.160865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.161037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.161073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.161184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.161210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.161374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.161401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.161568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.161595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 
00:25:06.054 [2024-07-15 13:04:24.161734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.161780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.161925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.161952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.162073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.162098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.162254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.162279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.162444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.162470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.162639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.162665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.162820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.162846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.162956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.162982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.163101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.163130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.163335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.163361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd8000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 
00:25:06.054 [2024-07-15 13:04:24.163470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.163498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.163640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.163666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.163858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.163884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.164037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.164064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.164243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.164269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.164415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.164441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.164616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.164642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.164799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.164826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.164969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.164996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.165147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.165174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 
00:25:06.054 [2024-07-15 13:04:24.165312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.165339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.165479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.165505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.165620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.165646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.165766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.165800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.165916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.165942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.166079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.166106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.166247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.166273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.166409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.166435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.166604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.166631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.166770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.166796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 
00:25:06.054 [2024-07-15 13:04:24.166941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.166967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.167100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.167127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.167242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.054 [2024-07-15 13:04:24.167268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:06.054 [2024-07-15 13:04:24.167422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.167449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.054 [2024-07-15 13:04:24.167601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.054 [2024-07-15 13:04:24.167628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.167747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.167773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.167923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.054 [2024-07-15 13:04:24.167949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.054 qpair failed and we were unable to recover it. 00:25:06.054 [2024-07-15 13:04:24.168091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.168118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 
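The next traced step creates the subsystem the test will connect to, nqn.2016-06.io.spdk:cnode1, with -a allowing any host NQN and -s setting the serial number. As a standalone sketch, again assuming the usual scripts/rpc.py entry point:

$ scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001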
00:25:06.055 [2024-07-15 13:04:24.168307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.168333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.168459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.168486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.168648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.168674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.168859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.168886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.168994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.169020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.169199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.169226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.169422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.169448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.169617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.169644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.169792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.169819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.169961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.169988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 
00:25:06.055 [2024-07-15 13:04:24.170160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.170187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.170373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.170400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.170563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.170590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.170779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.170807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.170957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.170983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.171143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.171170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.171334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.171360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.171553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.171580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.171718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.171758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.171929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.171965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 
00:25:06.055 [2024-07-15 13:04:24.172125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.172151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.172309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.172336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.172477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.172503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.172641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.172667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.172838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.172866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.172965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.172990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.173160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.173187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.173358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.173385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.173539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.173565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.173698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.173724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 
00:25:06.055 [2024-07-15 13:04:24.173910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.173938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.174098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.174125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.174276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.174302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.174486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.174512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.174655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.174682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.055 [2024-07-15 13:04:24.174817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.055 [2024-07-15 13:04:24.174848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.055 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.174993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.175020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.175189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.175216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.056 [2024-07-15 13:04:24.175353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.175380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 
00:25:06.056 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:06.056 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.056 [2024-07-15 13:04:24.175564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.175589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.056 [2024-07-15 13:04:24.175725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.175757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.175955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.175982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.176167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.176194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.176340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.176366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.176545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.176571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.176758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.176786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.176915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.176942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 
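Here the script attaches the Malloc0 bdev to that subsystem as a namespace. The log does not show how Malloc0 itself was created, so the first line below is a hypothetical example only (64 MiB, 512-byte blocks); the second line mirrors the traced call:

$ scripts/rpc.py bdev_malloc_create -b Malloc0 64 512     # hypothetical: create the backing malloc bdev
$ scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0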
00:25:06.056 [2024-07-15 13:04:24.177118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.177144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.177314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.177341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.177509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.177545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.177749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.177776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.177962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.177989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.178136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.178162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.178329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.178356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.178521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.178548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.178683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.178709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.178859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.178885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 
00:25:06.056 [2024-07-15 13:04:24.179016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.179043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.179176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.179202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.179329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.179355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.179531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.179568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.179680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.179707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.179918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.179945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.180114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.180141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.180327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.180353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.180454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.180481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.180629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.180656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 
00:25:06.056 [2024-07-15 13:04:24.180758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.180784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.180966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.180993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.181159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.181185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.181330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.181363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.181518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.181544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.181733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.181767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.181900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.056 [2024-07-15 13:04:24.181926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.056 qpair failed and we were unable to recover it. 00:25:06.056 [2024-07-15 13:04:24.182071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.182097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.182216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.182241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.182416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.182452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 
00:25:06.057 [2024-07-15 13:04:24.182623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.182656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 [2024-07-15 13:04:24.182805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.182833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 [2024-07-15 13:04:24.183024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.183050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 [2024-07-15 13:04:24.183184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.183216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.057 [2024-07-15 13:04:24.183401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.183428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.057 [2024-07-15 13:04:24.183581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.183608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.057 [2024-07-15 13:04:24.183774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.183801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 [2024-07-15 13:04:24.183971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:06.057 [2024-07-15 13:04:24.183996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 [2024-07-15 13:04:24.184142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.184171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.184366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.184394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.184539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.184566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.184704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.184729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.184935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.184961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.185102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.185129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.185299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.185326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.185522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.185548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.185745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.185771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.185938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.185965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 
00:25:06.057 [2024-07-15 13:04:24.186106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.186133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.186301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.186327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.186494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.186520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.186696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.186723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.186876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.186903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.187036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.187062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 00:25:06.057 [2024-07-15 13:04:24.187248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:06.057 [2024-07-15 13:04:24.187275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7dd0000b90 with addr=10.0.0.2, port=4420 00:25:06.057 qpair failed and we were unable to recover it. 
00:25:06.057 [2024-07-15 13:04:24.187335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:06.057 [2024-07-15 13:04:24.189856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.057 [2024-07-15 13:04:24.190002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.057 [2024-07-15 13:04:24.190029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.057 [2024-07-15 13:04:24.190061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.057 [2024-07-15 13:04:24.190074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90
00:25:06.057 [2024-07-15 13:04:24.190109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:06.057 qpair failed and we were unable to recover it.
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:06.057 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:06.315 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:06.315 13:04:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3498693
00:25:06.315 [2024-07-15 13:04:24.199745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.315 [2024-07-15 13:04:24.199863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.315 [2024-07-15 13:04:24.199891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.315 [2024-07-15 13:04:24.199907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.315 [2024-07-15 13:04:24.199921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90
00:25:06.315 [2024-07-15 13:04:24.199952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:06.315 qpair failed and we were unable to recover it.
00:25:06.316 [2024-07-15 13:04:24.209645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.209760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.209793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.209810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.209823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.209854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.219590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.219705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.219730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.219755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.219769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.219800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.229690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.229799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.229825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.229840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.229853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.229883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 
00:25:06.316 [2024-07-15 13:04:24.239656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.239759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.239797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.239812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.239827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.239858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.249710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.249855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.249882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.249898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.249911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.249947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.259758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.259859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.259885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.259900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.259913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.259944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 
00:25:06.316 [2024-07-15 13:04:24.269752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.269865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.269892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.269907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.269920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.269962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.279748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.279904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.279931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.279947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.279960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.279991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.289768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.289909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.289936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.289952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.289965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.289996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 
00:25:06.316 [2024-07-15 13:04:24.299826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.299994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.300036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.300053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.300082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.300113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.309847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.309967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.309995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.310011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.310024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.310055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.319879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.319994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.320034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.320050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.320063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.320092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 
00:25:06.316 [2024-07-15 13:04:24.329912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.330010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.330052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.330067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.330080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.330109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.339926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.340045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.316 [2024-07-15 13:04:24.340071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.316 [2024-07-15 13:04:24.340086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.316 [2024-07-15 13:04:24.340099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.316 [2024-07-15 13:04:24.340134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.316 qpair failed and we were unable to recover it. 00:25:06.316 [2024-07-15 13:04:24.350049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.316 [2024-07-15 13:04:24.350165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.350191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.350206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.350218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.350246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 
00:25:06.317 [2024-07-15 13:04:24.359960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.360069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.360094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.360109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.360121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.360151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.370064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.370184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.370209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.370224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.370236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.370265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.380026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.380154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.380180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.380195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.380208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.380237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 
00:25:06.317 [2024-07-15 13:04:24.390063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.390177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.390203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.390218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.390231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.390260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.400099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.400190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.400215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.400230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.400243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.400272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.410146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.410237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.410262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.410277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.410289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.410318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 
00:25:06.317 [2024-07-15 13:04:24.420134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.420236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.420262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.420277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.420289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.420318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.430227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.430322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.430348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.430363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.430380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.430411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.440265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.440357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.440383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.440398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.440410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.440439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 
00:25:06.317 [2024-07-15 13:04:24.450302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.450448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.450473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.450488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.450501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.450529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.460252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.460351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.460377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.460393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.460405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.460434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.470316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.470424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.470448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.470463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.470476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.470506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 
00:25:06.317 [2024-07-15 13:04:24.480366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.480459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.480485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.480500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.317 [2024-07-15 13:04:24.480513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.317 [2024-07-15 13:04:24.480541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.317 qpair failed and we were unable to recover it. 00:25:06.317 [2024-07-15 13:04:24.490338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.317 [2024-07-15 13:04:24.490425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.317 [2024-07-15 13:04:24.490450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.317 [2024-07-15 13:04:24.490465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.318 [2024-07-15 13:04:24.490478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.318 [2024-07-15 13:04:24.490507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.318 qpair failed and we were unable to recover it. 00:25:06.318 [2024-07-15 13:04:24.500607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.318 [2024-07-15 13:04:24.500708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.318 [2024-07-15 13:04:24.500756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.318 [2024-07-15 13:04:24.500773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.318 [2024-07-15 13:04:24.500785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.318 [2024-07-15 13:04:24.500823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.318 qpair failed and we were unable to recover it. 
00:25:06.318 [2024-07-15 13:04:24.510396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.318 [2024-07-15 13:04:24.510492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.318 [2024-07-15 13:04:24.510517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.318 [2024-07-15 13:04:24.510532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.318 [2024-07-15 13:04:24.510544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.318 [2024-07-15 13:04:24.510573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.318 qpair failed and we were unable to recover it. 00:25:06.318 [2024-07-15 13:04:24.520437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.318 [2024-07-15 13:04:24.520528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.318 [2024-07-15 13:04:24.520554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.318 [2024-07-15 13:04:24.520576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.318 [2024-07-15 13:04:24.520590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.318 [2024-07-15 13:04:24.520620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.318 qpair failed and we were unable to recover it. 00:25:06.575 [2024-07-15 13:04:24.530443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.575 [2024-07-15 13:04:24.530559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.575 [2024-07-15 13:04:24.530585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.575 [2024-07-15 13:04:24.530600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.575 [2024-07-15 13:04:24.530612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.575 [2024-07-15 13:04:24.530640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.575 qpair failed and we were unable to recover it. 
00:25:06.575 [2024-07-15 13:04:24.540516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.575 [2024-07-15 13:04:24.540633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.575 [2024-07-15 13:04:24.540658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.575 [2024-07-15 13:04:24.540673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.575 [2024-07-15 13:04:24.540685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.575 [2024-07-15 13:04:24.540715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.575 qpair failed and we were unable to recover it. 00:25:06.575 [2024-07-15 13:04:24.550524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.575 [2024-07-15 13:04:24.550618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.575 [2024-07-15 13:04:24.550643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.575 [2024-07-15 13:04:24.550658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.575 [2024-07-15 13:04:24.550670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.575 [2024-07-15 13:04:24.550699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.575 qpair failed and we were unable to recover it. 00:25:06.575 [2024-07-15 13:04:24.560531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.575 [2024-07-15 13:04:24.560657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.575 [2024-07-15 13:04:24.560682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.575 [2024-07-15 13:04:24.560697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.575 [2024-07-15 13:04:24.560709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.575 [2024-07-15 13:04:24.560744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.575 qpair failed and we were unable to recover it. 
00:25:06.575 [2024-07-15 13:04:24.570550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.570640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.570665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.570680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.570692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.570721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.580623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.580734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.580769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.580785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.580798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.580829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.590644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.590760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.590787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.590803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.590815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.590846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 
00:25:06.576 [2024-07-15 13:04:24.600664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.600790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.600817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.600832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.600845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.600875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.610667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.610770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.610802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.610818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.610831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.610871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.620813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.620926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.620963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.620978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.620991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.621030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 
00:25:06.576 [2024-07-15 13:04:24.630799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.630896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.630922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.630938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.630951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.630981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.640799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.640913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.640939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.640955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.640968] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.640999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.650801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.650896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.650922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.650938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.650950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.650986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 
00:25:06.576 [2024-07-15 13:04:24.660858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.660956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.660982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.660997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.661010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.661055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.670876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.671000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.671040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.671056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.671068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.671097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.576 [2024-07-15 13:04:24.680916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.681052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.681077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.681092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.681104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.681133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 
00:25:06.576 [2024-07-15 13:04:24.690941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.576 [2024-07-15 13:04:24.691059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.576 [2024-07-15 13:04:24.691083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.576 [2024-07-15 13:04:24.691098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.576 [2024-07-15 13:04:24.691110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.576 [2024-07-15 13:04:24.691139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.576 qpair failed and we were unable to recover it. 00:25:06.577 [2024-07-15 13:04:24.700991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.701118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.701148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.701164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.701176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.701205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 00:25:06.577 [2024-07-15 13:04:24.711023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.711181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.711207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.711222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.711234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.711274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 
00:25:06.577 [2024-07-15 13:04:24.721040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.721149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.721174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.721190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.721202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.721233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 00:25:06.577 [2024-07-15 13:04:24.731047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.731187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.731212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.731227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.731240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.731278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 00:25:06.577 [2024-07-15 13:04:24.741126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.741222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.741248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.741263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.741275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.741313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 
00:25:06.577 [2024-07-15 13:04:24.751167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.751302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.751327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.751341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.751353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.751382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 00:25:06.577 [2024-07-15 13:04:24.761158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.761251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.761277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.761292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.761304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.761344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 00:25:06.577 [2024-07-15 13:04:24.771111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.577 [2024-07-15 13:04:24.771248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.577 [2024-07-15 13:04:24.771272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.577 [2024-07-15 13:04:24.771286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.577 [2024-07-15 13:04:24.771298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.577 [2024-07-15 13:04:24.771327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.577 qpair failed and we were unable to recover it. 
00:25:06.835 [2024-07-15 13:04:24.781169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.835 [2024-07-15 13:04:24.781275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.835 [2024-07-15 13:04:24.781301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.835 [2024-07-15 13:04:24.781317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.835 [2024-07-15 13:04:24.781330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.835 [2024-07-15 13:04:24.781360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.835 qpair failed and we were unable to recover it. 00:25:06.835 [2024-07-15 13:04:24.791284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.835 [2024-07-15 13:04:24.791399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.835 [2024-07-15 13:04:24.791435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.835 [2024-07-15 13:04:24.791450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.835 [2024-07-15 13:04:24.791463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.835 [2024-07-15 13:04:24.791491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.835 qpair failed and we were unable to recover it. 00:25:06.835 [2024-07-15 13:04:24.801244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.835 [2024-07-15 13:04:24.801385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.835 [2024-07-15 13:04:24.801410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.835 [2024-07-15 13:04:24.801425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.835 [2024-07-15 13:04:24.801437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.835 [2024-07-15 13:04:24.801467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.835 qpair failed and we were unable to recover it. 
00:25:06.835 [2024-07-15 13:04:24.811229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.835 [2024-07-15 13:04:24.811318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.835 [2024-07-15 13:04:24.811344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.835 [2024-07-15 13:04:24.811359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.835 [2024-07-15 13:04:24.811371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.835 [2024-07-15 13:04:24.811401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.835 qpair failed and we were unable to recover it. 00:25:06.835 [2024-07-15 13:04:24.821290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.821407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.821432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.821447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.821459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.821487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.831317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.831411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.831437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.831451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.831468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.831498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 
00:25:06.836 [2024-07-15 13:04:24.841353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.841443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.841468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.841483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.841495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.841524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.851395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.851482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.851507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.851522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.851534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.851563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.861408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.861501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.861526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.861541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.861554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.861584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 
00:25:06.836 [2024-07-15 13:04:24.871410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.871550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.871575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.871590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.871603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.871631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.881431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.881522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.881548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.881563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.881575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.881604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.891474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.891568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.891594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.891609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.891621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.891650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 
00:25:06.836 [2024-07-15 13:04:24.901510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.901610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.901635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.901650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.901663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.901691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.911507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.911628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.911653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.911667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.911680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.911709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.921644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.921760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.921786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.921806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.921820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.921850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 
00:25:06.836 [2024-07-15 13:04:24.931588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.931682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.931706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.931743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.931758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.931790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.941687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.941813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.941840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.941857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.941870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.941900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 00:25:06.836 [2024-07-15 13:04:24.951629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.951733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.951766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.951782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.951794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.836 [2024-07-15 13:04:24.951825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.836 qpair failed and we were unable to recover it. 
00:25:06.836 [2024-07-15 13:04:24.961700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.836 [2024-07-15 13:04:24.961835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.836 [2024-07-15 13:04:24.961861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.836 [2024-07-15 13:04:24.961876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.836 [2024-07-15 13:04:24.961889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:24.961918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 00:25:06.837 [2024-07-15 13:04:24.971728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:24.971877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:24.971903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:24.971919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:24.971931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:24.971972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 00:25:06.837 [2024-07-15 13:04:24.981799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:24.981919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:24.981946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:24.981961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:24.981974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:24.982005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 
00:25:06.837 [2024-07-15 13:04:24.991911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:24.992016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:24.992057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:24.992072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:24.992085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:24.992126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 00:25:06.837 [2024-07-15 13:04:25.001861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:25.001960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:25.001986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:25.002002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:25.002015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:25.002060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 00:25:06.837 [2024-07-15 13:04:25.011907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:25.012074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:25.012099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:25.012120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:25.012138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:25.012166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 
00:25:06.837 [2024-07-15 13:04:25.021943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:25.022047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:25.022087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:25.022103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:25.022115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:25.022144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 00:25:06.837 [2024-07-15 13:04:25.031899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.837 [2024-07-15 13:04:25.032033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.837 [2024-07-15 13:04:25.032074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.837 [2024-07-15 13:04:25.032089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.837 [2024-07-15 13:04:25.032101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:06.837 [2024-07-15 13:04:25.032141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:06.837 qpair failed and we were unable to recover it. 00:25:07.096 [2024-07-15 13:04:25.041950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.042042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.042068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.042099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.042111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.042141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 
00:25:07.096 [2024-07-15 13:04:25.052022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.052131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.052156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.052171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.052183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.052211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 00:25:07.096 [2024-07-15 13:04:25.062071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.062187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.062212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.062227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.062240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.062269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 00:25:07.096 [2024-07-15 13:04:25.072044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.072153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.072178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.072194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.072206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.072234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 
00:25:07.096 [2024-07-15 13:04:25.082131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.082241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.082268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.082283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.082296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.082324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 00:25:07.096 [2024-07-15 13:04:25.092099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.092189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.092214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.092229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.092241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.092271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 00:25:07.096 [2024-07-15 13:04:25.102130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.096 [2024-07-15 13:04:25.102229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.096 [2024-07-15 13:04:25.102260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.096 [2024-07-15 13:04:25.102276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.096 [2024-07-15 13:04:25.102289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.096 [2024-07-15 13:04:25.102317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.096 qpair failed and we were unable to recover it. 
00:25:07.096 [2024-07-15 13:04:25.112134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.112226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.112252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.112267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.112279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.112308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.122175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.122268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.122293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.122307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.122320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.122348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.132214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.132306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.132331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.132345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.132357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.132386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 
00:25:07.097 [2024-07-15 13:04:25.142209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.142337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.142363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.142378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.142390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.142424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.152307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.152402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.152426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.152441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.152454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.152482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.162258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.162356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.162380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.162395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.162408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.162436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 
00:25:07.097 [2024-07-15 13:04:25.172314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.172455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.172481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.172496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.172508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.172546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.182345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.182455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.182480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.182495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.182507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.182536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.192385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.192476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.192506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.192522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.192534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.192563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 
00:25:07.097 [2024-07-15 13:04:25.202351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.202492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.202517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.202532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.202544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.202574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.212408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.212507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.212532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.212547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.212559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.212588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.222531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.222640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.222665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.222679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.222691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.222721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 
00:25:07.097 [2024-07-15 13:04:25.232432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.232554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.232581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.232596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.232614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.232645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.242525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.242663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.097 [2024-07-15 13:04:25.242690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.097 [2024-07-15 13:04:25.242705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.097 [2024-07-15 13:04:25.242731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.097 [2024-07-15 13:04:25.242784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.097 qpair failed and we were unable to recover it. 00:25:07.097 [2024-07-15 13:04:25.252514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.097 [2024-07-15 13:04:25.252624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.098 [2024-07-15 13:04:25.252649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.098 [2024-07-15 13:04:25.252664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.098 [2024-07-15 13:04:25.252676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.098 [2024-07-15 13:04:25.252705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.098 qpair failed and we were unable to recover it. 
00:25:07.098 [2024-07-15 13:04:25.262544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.098 [2024-07-15 13:04:25.262682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.098 [2024-07-15 13:04:25.262707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.098 [2024-07-15 13:04:25.262746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.098 [2024-07-15 13:04:25.262761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.098 [2024-07-15 13:04:25.262798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.098 qpair failed and we were unable to recover it. 00:25:07.098 [2024-07-15 13:04:25.272599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.098 [2024-07-15 13:04:25.272734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.098 [2024-07-15 13:04:25.272784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.098 [2024-07-15 13:04:25.272800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.098 [2024-07-15 13:04:25.272813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.098 [2024-07-15 13:04:25.272843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.098 qpair failed and we were unable to recover it. 00:25:07.098 [2024-07-15 13:04:25.282592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.098 [2024-07-15 13:04:25.282685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.098 [2024-07-15 13:04:25.282709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.098 [2024-07-15 13:04:25.282745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.098 [2024-07-15 13:04:25.282760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.098 [2024-07-15 13:04:25.282802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.098 qpair failed and we were unable to recover it. 
00:25:07.098 [2024-07-15 13:04:25.292572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.098 [2024-07-15 13:04:25.292680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.098 [2024-07-15 13:04:25.292705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.098 [2024-07-15 13:04:25.292735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.098 [2024-07-15 13:04:25.292755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.098 [2024-07-15 13:04:25.292797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.098 qpair failed and we were unable to recover it. 00:25:07.357 [2024-07-15 13:04:25.302684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.302842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.302868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.302884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.302897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.302927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 00:25:07.357 [2024-07-15 13:04:25.312683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.312805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.312830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.312845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.312858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.312887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 
00:25:07.357 [2024-07-15 13:04:25.322708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.322874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.322899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.322919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.322933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.322965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 00:25:07.357 [2024-07-15 13:04:25.332707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.332826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.332853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.332869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.332882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.332913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 00:25:07.357 [2024-07-15 13:04:25.342785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.342885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.342911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.342927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.342940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.342970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 
00:25:07.357 [2024-07-15 13:04:25.352762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.352867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.352892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.352908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.352920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.352951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 00:25:07.357 [2024-07-15 13:04:25.362832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.362930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.362955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.362970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.357 [2024-07-15 13:04:25.362983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.357 [2024-07-15 13:04:25.363014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.357 qpair failed and we were unable to recover it. 00:25:07.357 [2024-07-15 13:04:25.372854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.357 [2024-07-15 13:04:25.372948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.357 [2024-07-15 13:04:25.372974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.357 [2024-07-15 13:04:25.372990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.373002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.373047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 
00:25:07.358 [2024-07-15 13:04:25.382957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.383111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.383149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.383164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.383177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.383208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.392905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.393024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.393064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.393080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.393092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.393121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.402955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.403069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.403093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.403108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.403120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.403150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 
00:25:07.358 [2024-07-15 13:04:25.412986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.413118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.413142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.413162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.413175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.413205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.423060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.423164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.423189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.423205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.423217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.423247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.433060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.433154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.433177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.433193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.433205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.433235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 
00:25:07.358 [2024-07-15 13:04:25.443057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.443201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.443226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.443241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.443254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.443283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.453074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.453229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.453255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.453270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.453282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.453311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.463119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.463216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.463241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.463256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.463269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.463299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 
00:25:07.358 [2024-07-15 13:04:25.473178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.473284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.473309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.473324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.473338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.473367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.483138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.483231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.483257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.483273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.483285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.483314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 00:25:07.358 [2024-07-15 13:04:25.493151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.493240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.358 [2024-07-15 13:04:25.493266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.358 [2024-07-15 13:04:25.493281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.358 [2024-07-15 13:04:25.493294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.358 [2024-07-15 13:04:25.493323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.358 qpair failed and we were unable to recover it. 
00:25:07.358 [2024-07-15 13:04:25.503294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.358 [2024-07-15 13:04:25.503400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.359 [2024-07-15 13:04:25.503436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.359 [2024-07-15 13:04:25.503451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.359 [2024-07-15 13:04:25.503463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.359 [2024-07-15 13:04:25.503502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.359 qpair failed and we were unable to recover it. 00:25:07.359 [2024-07-15 13:04:25.513267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.359 [2024-07-15 13:04:25.513365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.359 [2024-07-15 13:04:25.513389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.359 [2024-07-15 13:04:25.513404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.359 [2024-07-15 13:04:25.513417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.359 [2024-07-15 13:04:25.513445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.359 qpair failed and we were unable to recover it. 00:25:07.359 [2024-07-15 13:04:25.523286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.359 [2024-07-15 13:04:25.523406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.359 [2024-07-15 13:04:25.523430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.359 [2024-07-15 13:04:25.523444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.359 [2024-07-15 13:04:25.523457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.359 [2024-07-15 13:04:25.523486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.359 qpair failed and we were unable to recover it. 
00:25:07.359 [2024-07-15 13:04:25.533361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.359 [2024-07-15 13:04:25.533466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.359 [2024-07-15 13:04:25.533492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.359 [2024-07-15 13:04:25.533508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.359 [2024-07-15 13:04:25.533520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.359 [2024-07-15 13:04:25.533549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.359 qpair failed and we were unable to recover it. 00:25:07.359 [2024-07-15 13:04:25.543333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.359 [2024-07-15 13:04:25.543435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.359 [2024-07-15 13:04:25.543461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.359 [2024-07-15 13:04:25.543476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.359 [2024-07-15 13:04:25.543489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.359 [2024-07-15 13:04:25.543522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.359 qpair failed and we were unable to recover it. 00:25:07.359 [2024-07-15 13:04:25.553392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.359 [2024-07-15 13:04:25.553493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.359 [2024-07-15 13:04:25.553518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.359 [2024-07-15 13:04:25.553533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.359 [2024-07-15 13:04:25.553546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.359 [2024-07-15 13:04:25.553575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.359 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.563423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.563560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.563586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.563601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.563614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.563644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.573366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.573457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.573482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.573497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.573509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.573538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.583454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.583553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.583576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.583591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.583604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.583633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.593470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.593603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.593633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.593648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.593660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.593690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.603541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.603679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.603705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.603720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.603732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.603785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.613535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.613628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.613652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.613667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.613679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.613708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.623625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.623778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.623804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.623820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.623833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.623863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.633636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.633785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.633810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.633825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.633842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.633875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.643657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.643774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.643801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.643817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.643830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.643869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.653637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.653763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.653790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.653806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.653819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.653856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.663681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.663850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.663876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.663891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.663904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.663935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.673771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.673872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.673898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.673913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.673926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.673956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.683752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.683862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.683888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.683904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.683916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.683946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.693811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.693912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.693937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.693952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.693965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.693995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.703834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.703980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.704007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.704039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.704052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.704082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.713861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.713963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.713987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.714003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.714015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.714060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.723835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.723999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.724025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.724040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.724058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.724090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.733909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.734004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.734042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.734059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.734071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.734100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 
00:25:07.619 [2024-07-15 13:04:25.743905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.619 [2024-07-15 13:04:25.744068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.619 [2024-07-15 13:04:25.744092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.619 [2024-07-15 13:04:25.744107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.619 [2024-07-15 13:04:25.744120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.619 [2024-07-15 13:04:25.744150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.619 qpair failed and we were unable to recover it. 00:25:07.619 [2024-07-15 13:04:25.753939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.754129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.754155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.754170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.754184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.754214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 00:25:07.620 [2024-07-15 13:04:25.763973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.764066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.764107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.764122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.764134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.764173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 
00:25:07.620 [2024-07-15 13:04:25.773987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.774095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.774120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.774135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.774147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.774177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 00:25:07.620 [2024-07-15 13:04:25.784136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.784258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.784282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.784296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.784309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.784338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 00:25:07.620 [2024-07-15 13:04:25.794076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.794173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.794196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.794211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.794223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.794263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 
00:25:07.620 [2024-07-15 13:04:25.804110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.804204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.804228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.804243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.804255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.804285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 00:25:07.620 [2024-07-15 13:04:25.814054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.620 [2024-07-15 13:04:25.814164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.620 [2024-07-15 13:04:25.814188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.620 [2024-07-15 13:04:25.814208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.620 [2024-07-15 13:04:25.814221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.620 [2024-07-15 13:04:25.814250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.620 qpair failed and we were unable to recover it. 00:25:07.620 [2024-07-15 13:04:25.824129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.824228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.824253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.824270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.824283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.824313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 
00:25:07.880 [2024-07-15 13:04:25.834145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.834282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.834307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.834322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.834334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.834364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 00:25:07.880 [2024-07-15 13:04:25.844179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.844278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.844302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.844317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.844330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.844358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 00:25:07.880 [2024-07-15 13:04:25.854199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.854294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.854318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.854333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.854346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.854375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 
00:25:07.880 [2024-07-15 13:04:25.864236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.864355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.864379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.864393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.864405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.864435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 00:25:07.880 [2024-07-15 13:04:25.874276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.874373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.874397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.874412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.874424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.874453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 00:25:07.880 [2024-07-15 13:04:25.884248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.884342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.884366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.884380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.884393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.884422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 
00:25:07.880 [2024-07-15 13:04:25.894365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.894461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.894485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.894500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.894512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.894542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 00:25:07.880 [2024-07-15 13:04:25.904359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.880 [2024-07-15 13:04:25.904456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.880 [2024-07-15 13:04:25.904485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.880 [2024-07-15 13:04:25.904501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.880 [2024-07-15 13:04:25.904514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.880 [2024-07-15 13:04:25.904543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.880 qpair failed and we were unable to recover it. 00:25:07.880 [2024-07-15 13:04:25.914356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.914457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.914480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.914496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.914508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.914537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 
00:25:07.881 [2024-07-15 13:04:25.924377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.924468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.924492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.924507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.924519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.924548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:25.934457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.934552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.934576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.934591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.934604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.934633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:25.944424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.944523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.944547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.944562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.944574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.944609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 
00:25:07.881 [2024-07-15 13:04:25.954459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.954557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.954581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.954596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.954608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.954637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:25.964467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.964563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.964587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.964602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.964614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.964643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:25.974508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.974648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.974689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.974704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.974716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.974757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 
00:25:07.881 [2024-07-15 13:04:25.984553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.984654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.984679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.984694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.984707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.984759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:25.994547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:25.994639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:25.994669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:25.994686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:25.994698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:25.994750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:26.004563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:26.004665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:26.004691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:26.004707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:26.004734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:26.004773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 
00:25:07.881 [2024-07-15 13:04:26.014630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:26.014747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:26.014774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:26.014790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:26.014802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:26.014833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:26.024748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:26.024857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:26.024884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:26.024899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:26.024912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:26.024942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:26.034682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:26.034812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:26.034837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:26.034852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:26.034870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:26.034902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 
00:25:07.881 [2024-07-15 13:04:26.044781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:26.044890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:26.044915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.881 [2024-07-15 13:04:26.044931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.881 [2024-07-15 13:04:26.044944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.881 [2024-07-15 13:04:26.044974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.881 qpair failed and we were unable to recover it. 00:25:07.881 [2024-07-15 13:04:26.054845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.881 [2024-07-15 13:04:26.054968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.881 [2024-07-15 13:04:26.054993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.882 [2024-07-15 13:04:26.055008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.882 [2024-07-15 13:04:26.055036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.882 [2024-07-15 13:04:26.055067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.882 qpair failed and we were unable to recover it. 00:25:07.882 [2024-07-15 13:04:26.064804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.882 [2024-07-15 13:04:26.064910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.882 [2024-07-15 13:04:26.064935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.882 [2024-07-15 13:04:26.064950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.882 [2024-07-15 13:04:26.064963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.882 [2024-07-15 13:04:26.064994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.882 qpair failed and we were unable to recover it. 
00:25:07.882 [2024-07-15 13:04:26.074815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.882 [2024-07-15 13:04:26.074941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.882 [2024-07-15 13:04:26.074968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.882 [2024-07-15 13:04:26.074983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.882 [2024-07-15 13:04:26.074996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.882 [2024-07-15 13:04:26.075053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.882 qpair failed and we were unable to recover it. 00:25:07.882 [2024-07-15 13:04:26.084820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:07.882 [2024-07-15 13:04:26.084929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:07.882 [2024-07-15 13:04:26.084954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:07.882 [2024-07-15 13:04:26.084970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:07.882 [2024-07-15 13:04:26.084983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:07.882 [2024-07-15 13:04:26.085027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:07.882 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.094837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.094967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.094992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.095008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.095021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.095066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 
00:25:08.141 [2024-07-15 13:04:26.104895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.104998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.105037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.105052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.105065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.105094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.114972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.115103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.115127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.115141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.115154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.115183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.124970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.125118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.125143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.125157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.125175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.125207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 
00:25:08.141 [2024-07-15 13:04:26.135008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.135115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.135139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.135154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.135167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.135196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.145048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.145146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.145170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.145185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.145197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.145227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.155096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.155227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.155251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.155266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.155278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.155307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 
00:25:08.141 [2024-07-15 13:04:26.165147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.165261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.165286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.165301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.165313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.165341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.175111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.175207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.175231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.175247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.175259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.175288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.185131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.185241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.185265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.185279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.185292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.185321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 
00:25:08.141 [2024-07-15 13:04:26.195145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.195241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.195266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.195282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.195294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.195324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.205166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.205284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.205310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.205325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.205337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.205367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 00:25:08.141 [2024-07-15 13:04:26.215195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.215289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.215314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.215338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.141 [2024-07-15 13:04:26.215352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.141 [2024-07-15 13:04:26.215381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.141 qpair failed and we were unable to recover it. 
00:25:08.141 [2024-07-15 13:04:26.225228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.141 [2024-07-15 13:04:26.225334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.141 [2024-07-15 13:04:26.225358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.141 [2024-07-15 13:04:26.225373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.225386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.225417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.235217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.235347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.235373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.235387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.235400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.235429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.245252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.245349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.245374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.245389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.245401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.245430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 
00:25:08.142 [2024-07-15 13:04:26.255281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.255384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.255409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.255423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.255436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.255464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.265369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.265520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.265545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.265560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.265572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.265601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.275334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.275453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.275478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.275494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.275506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.275535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 
00:25:08.142 [2024-07-15 13:04:26.285444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.285549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.285573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.285587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.285600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.285629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.295491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.295587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.295613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.295628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.295641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.295670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.305531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.305655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.305685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.305702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.305728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.305768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 
00:25:08.142 [2024-07-15 13:04:26.315436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.315539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.315564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.315580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.315592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.315621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.325460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.325549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.325574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.325589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.325601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.325631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.142 [2024-07-15 13:04:26.335584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.335689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.335729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.335760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.335774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.335805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 
00:25:08.142 [2024-07-15 13:04:26.345568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.142 [2024-07-15 13:04:26.345678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.142 [2024-07-15 13:04:26.345703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.142 [2024-07-15 13:04:26.345719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.142 [2024-07-15 13:04:26.345732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.142 [2024-07-15 13:04:26.345778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.142 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.355555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.355649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.355674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.355689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.355701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.355756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.365701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.365819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.365845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.365860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.365873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.365902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.375628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.375743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.375771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.375787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.375800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.375830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.385665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.385782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.385809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.385825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.385838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.385869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.395650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.395786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.395818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.395834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.395847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.395877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.405748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.405839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.405864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.405879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.405891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.405922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.415742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.415848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.415875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.415890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.415903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.415934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.425767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.425870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.425896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.425911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.425923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.425953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.435820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.435954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.435980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.435995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.436008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.436058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.445825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.445920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.445944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.445959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.445971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.446001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.455931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.456055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.456080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.456095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.456107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.456136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.465899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.466016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.466057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.466073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.466085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.466115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.475997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.476150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.476174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.476190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.476202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.476233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.485968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.486084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.486107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.486122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.486134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.486163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.495976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.496084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.496107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.496122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.496134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.496163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.506039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.506148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.506173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.506188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.506200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.506228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.516023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.516137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.516163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.516178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.516190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.516220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.526077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.526177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.526202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.526217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.526234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.526264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.536084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.536176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.536199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.536214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.536226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.536255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.546119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.546215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.546240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.546255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.546268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.546297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 
00:25:08.401 [2024-07-15 13:04:26.556133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.401 [2024-07-15 13:04:26.556230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.401 [2024-07-15 13:04:26.556253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.401 [2024-07-15 13:04:26.556267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.401 [2024-07-15 13:04:26.556279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.401 [2024-07-15 13:04:26.556309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.401 qpair failed and we were unable to recover it. 00:25:08.401 [2024-07-15 13:04:26.566273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.402 [2024-07-15 13:04:26.566412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.402 [2024-07-15 13:04:26.566438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.402 [2024-07-15 13:04:26.566453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.402 [2024-07-15 13:04:26.566465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.402 [2024-07-15 13:04:26.566494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.402 qpair failed and we were unable to recover it. 00:25:08.402 [2024-07-15 13:04:26.576207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.402 [2024-07-15 13:04:26.576306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.402 [2024-07-15 13:04:26.576329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.402 [2024-07-15 13:04:26.576344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.402 [2024-07-15 13:04:26.576357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.402 [2024-07-15 13:04:26.576385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.402 qpair failed and we were unable to recover it. 
00:25:08.402 [2024-07-15 13:04:26.586229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.402 [2024-07-15 13:04:26.586323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.402 [2024-07-15 13:04:26.586346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.402 [2024-07-15 13:04:26.586360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.402 [2024-07-15 13:04:26.586372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.402 [2024-07-15 13:04:26.586401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.402 qpair failed and we were unable to recover it. 00:25:08.402 [2024-07-15 13:04:26.596325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.402 [2024-07-15 13:04:26.596461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.402 [2024-07-15 13:04:26.596487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.402 [2024-07-15 13:04:26.596502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.402 [2024-07-15 13:04:26.596514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.402 [2024-07-15 13:04:26.596543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.402 qpair failed and we were unable to recover it. 00:25:08.402 [2024-07-15 13:04:26.606264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.660 [2024-07-15 13:04:26.606378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.660 [2024-07-15 13:04:26.606402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.660 [2024-07-15 13:04:26.606417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.660 [2024-07-15 13:04:26.606430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.660 [2024-07-15 13:04:26.606459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.660 qpair failed and we were unable to recover it. 
00:25:08.660 [2024-07-15 13:04:26.616297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.660 [2024-07-15 13:04:26.616393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.660 [2024-07-15 13:04:26.616417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.660 [2024-07-15 13:04:26.616437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.660 [2024-07-15 13:04:26.616450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.660 [2024-07-15 13:04:26.616480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.626378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.626478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.626503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.626518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.626531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.626560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.636402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.636504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.636530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.636545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.636558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.636586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 
00:25:08.661 [2024-07-15 13:04:26.646389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.646488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.646513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.646527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.646540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.646569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.656419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.656529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.656554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.656569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.656581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.656611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.666493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.666592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.666618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.666633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.666645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.666674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 
00:25:08.661 [2024-07-15 13:04:26.676516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.676643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.676668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.676683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.676696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.676759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.686495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.686619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.686644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.686660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.686672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.686701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.696520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.696630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.696655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.696671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.696683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.696713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 
00:25:08.661 [2024-07-15 13:04:26.706607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.706707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.706754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.706777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.706791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.706834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.716623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.716753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.716779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.716795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.716808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.716838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.726656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.726771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.726796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.726811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.726823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.726853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 
00:25:08.661 [2024-07-15 13:04:26.736649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.736773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.736798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.736813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.736825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.736856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.746687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.746820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.746847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.746862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.746875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.746906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.756757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.756882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.756908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.756923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.756936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.756966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 
00:25:08.661 [2024-07-15 13:04:26.766800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.766908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.766933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.766948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.766961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.766991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.776826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.776933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.776958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.776972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.776984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.777014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.786832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.786935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.786962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.786977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.786990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.787020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 
00:25:08.661 [2024-07-15 13:04:26.796842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.796952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.796983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.796999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.797012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.797057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.661 qpair failed and we were unable to recover it. 00:25:08.661 [2024-07-15 13:04:26.806897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.661 [2024-07-15 13:04:26.807045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.661 [2024-07-15 13:04:26.807070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.661 [2024-07-15 13:04:26.807085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.661 [2024-07-15 13:04:26.807097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.661 [2024-07-15 13:04:26.807133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.662 qpair failed and we were unable to recover it. 00:25:08.662 [2024-07-15 13:04:26.816961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.662 [2024-07-15 13:04:26.817080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.662 [2024-07-15 13:04:26.817106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.662 [2024-07-15 13:04:26.817120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.662 [2024-07-15 13:04:26.817133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.662 [2024-07-15 13:04:26.817161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.662 qpair failed and we were unable to recover it. 
00:25:08.662 [2024-07-15 13:04:26.827000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.662 [2024-07-15 13:04:26.827110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.662 [2024-07-15 13:04:26.827136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.662 [2024-07-15 13:04:26.827151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.662 [2024-07-15 13:04:26.827163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.662 [2024-07-15 13:04:26.827200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.662 qpair failed and we were unable to recover it. 00:25:08.662 [2024-07-15 13:04:26.837037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.662 [2024-07-15 13:04:26.837137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.662 [2024-07-15 13:04:26.837161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.662 [2024-07-15 13:04:26.837175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.662 [2024-07-15 13:04:26.837187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.662 [2024-07-15 13:04:26.837222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.662 qpair failed and we were unable to recover it. 00:25:08.662 [2024-07-15 13:04:26.846989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.662 [2024-07-15 13:04:26.847100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.662 [2024-07-15 13:04:26.847126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.662 [2024-07-15 13:04:26.847141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.662 [2024-07-15 13:04:26.847153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.662 [2024-07-15 13:04:26.847182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.662 qpair failed and we were unable to recover it. 
00:25:08.662 [2024-07-15 13:04:26.857072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.662 [2024-07-15 13:04:26.857171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.662 [2024-07-15 13:04:26.857196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.662 [2024-07-15 13:04:26.857212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.662 [2024-07-15 13:04:26.857223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.662 [2024-07-15 13:04:26.857252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.662 qpair failed and we were unable to recover it. 00:25:08.921 [2024-07-15 13:04:26.867019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.867126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.867153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.867168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.867181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.867211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 00:25:08.921 [2024-07-15 13:04:26.877105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.877206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.877231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.877246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.877258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.877287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 
00:25:08.921 [2024-07-15 13:04:26.887094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.887193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.887224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.887240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.887252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.887282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 00:25:08.921 [2024-07-15 13:04:26.897152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.897250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.897276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.897291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.897303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.897332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 00:25:08.921 [2024-07-15 13:04:26.907275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.907377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.907402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.907417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.907430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.907458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 
00:25:08.921 [2024-07-15 13:04:26.917245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.917348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.917372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.917387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.917399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.917427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 00:25:08.921 [2024-07-15 13:04:26.927287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.927377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.927403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.927417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.927435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.921 [2024-07-15 13:04:26.927465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.921 qpair failed and we were unable to recover it. 00:25:08.921 [2024-07-15 13:04:26.937245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.921 [2024-07-15 13:04:26.937344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.921 [2024-07-15 13:04:26.937368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.921 [2024-07-15 13:04:26.937382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.921 [2024-07-15 13:04:26.937395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.937423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 
00:25:08.922 [2024-07-15 13:04:26.947284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:26.947416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:26.947441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:26.947456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:26.947469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.947498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:26.957309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:26.957411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:26.957437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:26.957451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:26.957464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.957493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:26.967384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:26.967472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:26.967497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:26.967512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:26.967525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.967554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 
00:25:08.922 [2024-07-15 13:04:26.977316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:26.977433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:26.977457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:26.977472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:26.977484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.977515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:26.987396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:26.987504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:26.987527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:26.987542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:26.987554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.987584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:26.997384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:26.997513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:26.997539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:26.997555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:26.997567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:26.997596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 
00:25:08.922 [2024-07-15 13:04:27.007448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.007557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.007582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.007596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.007609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.007637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:27.017536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.017639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.017669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.017689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.017703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.017764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:27.027517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.027624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.027649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.027664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.027677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.027706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 
00:25:08.922 [2024-07-15 13:04:27.037542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.037644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.037669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.037684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.037696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.037758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:27.047598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.047696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.047720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.047734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.047771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.047802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:27.057560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.057660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.057684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.057699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.057711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.057772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 
00:25:08.922 [2024-07-15 13:04:27.067733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.067857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.067883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.067902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.067915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.922 [2024-07-15 13:04:27.067944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.922 qpair failed and we were unable to recover it. 00:25:08.922 [2024-07-15 13:04:27.077653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.922 [2024-07-15 13:04:27.077767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.922 [2024-07-15 13:04:27.077792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.922 [2024-07-15 13:04:27.077806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.922 [2024-07-15 13:04:27.077819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.923 [2024-07-15 13:04:27.077849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.923 qpair failed and we were unable to recover it. 00:25:08.923 [2024-07-15 13:04:27.087682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.923 [2024-07-15 13:04:27.087817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.923 [2024-07-15 13:04:27.087843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.923 [2024-07-15 13:04:27.087859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.923 [2024-07-15 13:04:27.087871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.923 [2024-07-15 13:04:27.087912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.923 qpair failed and we were unable to recover it. 
00:25:08.923 [2024-07-15 13:04:27.097771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.923 [2024-07-15 13:04:27.097872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.923 [2024-07-15 13:04:27.097908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.923 [2024-07-15 13:04:27.097924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.923 [2024-07-15 13:04:27.097936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.923 [2024-07-15 13:04:27.097966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.923 qpair failed and we were unable to recover it. 00:25:08.923 [2024-07-15 13:04:27.107779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.923 [2024-07-15 13:04:27.107907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.923 [2024-07-15 13:04:27.107932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.923 [2024-07-15 13:04:27.107954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.923 [2024-07-15 13:04:27.107967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.923 [2024-07-15 13:04:27.107997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.923 qpair failed and we were unable to recover it. 00:25:08.923 [2024-07-15 13:04:27.117817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:08.923 [2024-07-15 13:04:27.117915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:08.923 [2024-07-15 13:04:27.117939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:08.923 [2024-07-15 13:04:27.117954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:08.923 [2024-07-15 13:04:27.117967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:08.923 [2024-07-15 13:04:27.117997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:08.923 qpair failed and we were unable to recover it. 
00:25:09.183 [2024-07-15 13:04:27.127851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.183 [2024-07-15 13:04:27.127947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.183 [2024-07-15 13:04:27.127974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.183 [2024-07-15 13:04:27.127990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.183 [2024-07-15 13:04:27.128002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.183 [2024-07-15 13:04:27.128032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.183 qpair failed and we were unable to recover it. 00:25:09.183 [2024-07-15 13:04:27.137885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.183 [2024-07-15 13:04:27.138026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.183 [2024-07-15 13:04:27.138063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.183 [2024-07-15 13:04:27.138079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.183 [2024-07-15 13:04:27.138092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.183 [2024-07-15 13:04:27.138122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.183 qpair failed and we were unable to recover it. 00:25:09.183 [2024-07-15 13:04:27.147903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.183 [2024-07-15 13:04:27.148005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.183 [2024-07-15 13:04:27.148046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.183 [2024-07-15 13:04:27.148062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.183 [2024-07-15 13:04:27.148074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.183 [2024-07-15 13:04:27.148103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.183 qpair failed and we were unable to recover it. 
00:25:09.183 [2024-07-15 13:04:27.157877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.183 [2024-07-15 13:04:27.157972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.183 [2024-07-15 13:04:27.157997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.183 [2024-07-15 13:04:27.158012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.183 [2024-07-15 13:04:27.158039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.183 [2024-07-15 13:04:27.158069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.183 qpair failed and we were unable to recover it. 00:25:09.183 [2024-07-15 13:04:27.167949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.183 [2024-07-15 13:04:27.168082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.183 [2024-07-15 13:04:27.168108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.183 [2024-07-15 13:04:27.168122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.183 [2024-07-15 13:04:27.168135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.183 [2024-07-15 13:04:27.168164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.183 qpair failed and we were unable to recover it. 00:25:09.183 [2024-07-15 13:04:27.177958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.178071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.178094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.178109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.178122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.178151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 
00:25:09.184 [2024-07-15 13:04:27.187990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.188100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.188124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.188139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.188151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.188181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.198030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.198131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.198160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.198176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.198188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.198218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.208085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.208191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.208217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.208232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.208244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.208282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 
00:25:09.184 [2024-07-15 13:04:27.218120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.218237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.218261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.218277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.218289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.218319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.228105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.228204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.228228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.228243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.228272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.228303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.238107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.238201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.238225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.238240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.238253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.238288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 
00:25:09.184 [2024-07-15 13:04:27.248109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.248203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.248228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.248243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.248255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.248285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.258172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.258269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.258292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.258307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.258319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.258349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.268219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.268315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.268338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.268353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.268365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.268394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 
00:25:09.184 [2024-07-15 13:04:27.278230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.278330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.278354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.278369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.278381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.278411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.288298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.288434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.288465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.288481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.288506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.288535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.298340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.298435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.298459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.298473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.298489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.298519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 
00:25:09.184 [2024-07-15 13:04:27.308361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.308488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.184 [2024-07-15 13:04:27.308512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.184 [2024-07-15 13:04:27.308527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.184 [2024-07-15 13:04:27.308539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.184 [2024-07-15 13:04:27.308569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.184 qpair failed and we were unable to recover it. 00:25:09.184 [2024-07-15 13:04:27.318300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.184 [2024-07-15 13:04:27.318397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.318420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.318435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.318447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.318477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 00:25:09.185 [2024-07-15 13:04:27.328359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.185 [2024-07-15 13:04:27.328452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.328476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.328490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.328508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.328539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 
00:25:09.185 [2024-07-15 13:04:27.338433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.185 [2024-07-15 13:04:27.338529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.338554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.338568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.338580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.338610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 00:25:09.185 [2024-07-15 13:04:27.348433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.185 [2024-07-15 13:04:27.348534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.348558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.348573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.348586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.348615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 00:25:09.185 [2024-07-15 13:04:27.358486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.185 [2024-07-15 13:04:27.358582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.358605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.358619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.358632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.358663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 
00:25:09.185 [2024-07-15 13:04:27.368488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.185 [2024-07-15 13:04:27.368578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.368601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.368616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.368628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.368657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 00:25:09.185 [2024-07-15 13:04:27.378560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.185 [2024-07-15 13:04:27.378681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.185 [2024-07-15 13:04:27.378705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.185 [2024-07-15 13:04:27.378721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.185 [2024-07-15 13:04:27.378733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.185 [2024-07-15 13:04:27.378772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.185 qpair failed and we were unable to recover it. 00:25:09.185 [2024-07-15 13:04:27.388596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.447 [2024-07-15 13:04:27.388718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.447 [2024-07-15 13:04:27.388765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.447 [2024-07-15 13:04:27.388783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.447 [2024-07-15 13:04:27.388797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.388828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 
00:25:09.448 [2024-07-15 13:04:27.398538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.398634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.398657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.398671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.398683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.398735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.408570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.408662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.408686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.408702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.408716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.408770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.418567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.418656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.418680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.418695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.418713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.418766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 
00:25:09.448 [2024-07-15 13:04:27.428717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.428825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.428849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.428864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.428877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.428907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.438655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.438769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.438794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.438809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.438821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.438852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.448672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.448784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.448809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.448824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.448837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.448867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 
00:25:09.448 [2024-07-15 13:04:27.458693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.458815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.458840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.458855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.458868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.458899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.468794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.468926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.468952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.468968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.468981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.469012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.478790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.478887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.478911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.478926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.478939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.478969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 
00:25:09.448 [2024-07-15 13:04:27.488796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.488899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.488923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.488938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.488951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.488981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.498809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.498904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.498928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.498943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.498956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.498986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 00:25:09.448 [2024-07-15 13:04:27.508906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.509047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.509087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.509108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.509121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.448 [2024-07-15 13:04:27.509151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.448 qpair failed and we were unable to recover it. 
00:25:09.448 [2024-07-15 13:04:27.518916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.448 [2024-07-15 13:04:27.519016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.448 [2024-07-15 13:04:27.519054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.448 [2024-07-15 13:04:27.519069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.448 [2024-07-15 13:04:27.519081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.519110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.528901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.528995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.529020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.529049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.529062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.529091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.538937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.539049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.539075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.539090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.539103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.539132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 
00:25:09.449 [2024-07-15 13:04:27.548958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.549076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.549099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.549114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.549126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.549156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.558988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.559106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.559130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.559144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.559157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.559186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.569024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.569132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.569157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.569172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.569185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.569225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 
00:25:09.449 [2024-07-15 13:04:27.579129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.579243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.579274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.579289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.579301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.579330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.589126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.589234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.589257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.589271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.589284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.589313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.599137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.599230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.599258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.599273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.599286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.599315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 
00:25:09.449 [2024-07-15 13:04:27.609154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.609252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.609277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.609292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.609304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.609334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.619218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.619357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.619382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.619397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.619410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.619439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 00:25:09.449 [2024-07-15 13:04:27.629231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.629326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.629349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.629364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.629376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.449 [2024-07-15 13:04:27.629405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.449 qpair failed and we were unable to recover it. 
00:25:09.449 [2024-07-15 13:04:27.639263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.449 [2024-07-15 13:04:27.639372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.449 [2024-07-15 13:04:27.639398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.449 [2024-07-15 13:04:27.639414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.449 [2024-07-15 13:04:27.639426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.450 [2024-07-15 13:04:27.639463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.450 qpair failed and we were unable to recover it. 00:25:09.450 [2024-07-15 13:04:27.649280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.450 [2024-07-15 13:04:27.649401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.450 [2024-07-15 13:04:27.649427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.450 [2024-07-15 13:04:27.649443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.450 [2024-07-15 13:04:27.649456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.450 [2024-07-15 13:04:27.649486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.450 qpair failed and we were unable to recover it. 00:25:09.710 [2024-07-15 13:04:27.659402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.710 [2024-07-15 13:04:27.659535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.710 [2024-07-15 13:04:27.659561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.710 [2024-07-15 13:04:27.659577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.710 [2024-07-15 13:04:27.659590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.710 [2024-07-15 13:04:27.659620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.710 qpair failed and we were unable to recover it. 
00:25:09.710 [2024-07-15 13:04:27.669350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.710 [2024-07-15 13:04:27.669450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.710 [2024-07-15 13:04:27.669474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.710 [2024-07-15 13:04:27.669489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.710 [2024-07-15 13:04:27.669502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.710 [2024-07-15 13:04:27.669542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.710 qpair failed and we were unable to recover it. 00:25:09.710 [2024-07-15 13:04:27.679358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.710 [2024-07-15 13:04:27.679468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.679492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.679506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.679518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.679547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.689409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.689504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.689534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.689549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.689562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.689591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 
00:25:09.711 [2024-07-15 13:04:27.699463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.699556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.699580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.699595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.699607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.699637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.709451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.709549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.709573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.709588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.709600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.709630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.719469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.719567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.719590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.719604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.719617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.719646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 
00:25:09.711 [2024-07-15 13:04:27.729512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.729689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.729714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.729729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.729757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.729791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.739477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.739597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.739623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.739638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.739651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.739680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.749607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.749760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.749787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.749803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.749816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.749846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 
00:25:09.711 [2024-07-15 13:04:27.759617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.759711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.759759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.759775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.759788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.759819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.769556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.769651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.769675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.769690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.769702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.769754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.779681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.779825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.779849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.779864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.779877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.779907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 
00:25:09.711 [2024-07-15 13:04:27.789661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.789817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.789842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.789857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.789870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.789901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.799773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.799882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.799908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.799923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.799935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.799966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 00:25:09.711 [2024-07-15 13:04:27.809790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.809891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.809918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.809933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.809946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.711 [2024-07-15 13:04:27.809976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.711 qpair failed and we were unable to recover it. 
00:25:09.711 [2024-07-15 13:04:27.819704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.711 [2024-07-15 13:04:27.819857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.711 [2024-07-15 13:04:27.819883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.711 [2024-07-15 13:04:27.819899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.711 [2024-07-15 13:04:27.819917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.819948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.712 [2024-07-15 13:04:27.829766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.829865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.829889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.829905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.829918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.829949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.712 [2024-07-15 13:04:27.839774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.839884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.839910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.839926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.839939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.839970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 
00:25:09.712 [2024-07-15 13:04:27.849775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.849874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.849900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.849916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.849929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.849959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.712 [2024-07-15 13:04:27.859811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.859906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.859931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.859946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.859958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.859988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.712 [2024-07-15 13:04:27.869912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.870034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.870059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.870075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.870087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.870117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 
00:25:09.712 [2024-07-15 13:04:27.879881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.879979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.880015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.880046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.880060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.880090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.712 [2024-07-15 13:04:27.889947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.890063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.890087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.890101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.890114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.890144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.712 [2024-07-15 13:04:27.899952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.900077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.900101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.900116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.900129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.900158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 
00:25:09.712 [2024-07-15 13:04:27.909983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.712 [2024-07-15 13:04:27.910113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.712 [2024-07-15 13:04:27.910137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.712 [2024-07-15 13:04:27.910159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.712 [2024-07-15 13:04:27.910172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.712 [2024-07-15 13:04:27.910202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.712 qpair failed and we were unable to recover it. 00:25:09.971 [2024-07-15 13:04:27.919986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.920120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.971 [2024-07-15 13:04:27.920145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.971 [2024-07-15 13:04:27.920160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.971 [2024-07-15 13:04:27.920187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.971 [2024-07-15 13:04:27.920217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.971 qpair failed and we were unable to recover it. 00:25:09.971 [2024-07-15 13:04:27.930084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.930228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.971 [2024-07-15 13:04:27.930252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.971 [2024-07-15 13:04:27.930266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.971 [2024-07-15 13:04:27.930278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.971 [2024-07-15 13:04:27.930307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.971 qpair failed and we were unable to recover it. 
00:25:09.971 [2024-07-15 13:04:27.940144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.940269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.971 [2024-07-15 13:04:27.940292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.971 [2024-07-15 13:04:27.940308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.971 [2024-07-15 13:04:27.940320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.971 [2024-07-15 13:04:27.940350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.971 qpair failed and we were unable to recover it. 00:25:09.971 [2024-07-15 13:04:27.950130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.950228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.971 [2024-07-15 13:04:27.950252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.971 [2024-07-15 13:04:27.950267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.971 [2024-07-15 13:04:27.950279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.971 [2024-07-15 13:04:27.950310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.971 qpair failed and we were unable to recover it. 00:25:09.971 [2024-07-15 13:04:27.960157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.960275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.971 [2024-07-15 13:04:27.960299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.971 [2024-07-15 13:04:27.960314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.971 [2024-07-15 13:04:27.960326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.971 [2024-07-15 13:04:27.960356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.971 qpair failed and we were unable to recover it. 
00:25:09.971 [2024-07-15 13:04:27.970165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.970261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.971 [2024-07-15 13:04:27.970285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.971 [2024-07-15 13:04:27.970300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.971 [2024-07-15 13:04:27.970312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.971 [2024-07-15 13:04:27.970342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.971 qpair failed and we were unable to recover it. 00:25:09.971 [2024-07-15 13:04:27.980260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.971 [2024-07-15 13:04:27.980357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:27.980382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:27.980412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:27.980425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:27.980456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:27.990246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:27.990343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:27.990367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:27.990382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:27.990394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:27.990424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 13:04:28.000255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.000355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.000384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.000405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.000417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.000446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.010365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.010460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.010484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.010498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.010511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.010540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.020313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.020404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.020428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.020443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.020455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.020485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 13:04:28.030349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.030447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.030472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.030487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.030500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.030529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.040330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.040432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.040456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.040471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.040484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.040518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.050416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.050508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.050532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.050546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.050559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.050587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 13:04:28.060469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.060600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.060626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.060641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.060654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.060689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.070414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.070511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.070535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.070549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.070562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.070591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.080526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.080625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.080649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.080664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.080676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.080706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 13:04:28.090542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.090637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.090665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.090680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.090693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.090722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.100479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.100573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.100598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.100612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.100624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.100655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 00:25:09.972 [2024-07-15 13:04:28.110556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.110652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.110675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.110689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.972 [2024-07-15 13:04:28.110702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.972 [2024-07-15 13:04:28.110755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.972 qpair failed and we were unable to recover it. 
00:25:09.972 [2024-07-15 13:04:28.120569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.972 [2024-07-15 13:04:28.120707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.972 [2024-07-15 13:04:28.120754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.972 [2024-07-15 13:04:28.120771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.973 [2024-07-15 13:04:28.120784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.973 [2024-07-15 13:04:28.120814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.973 qpair failed and we were unable to recover it. 00:25:09.973 [2024-07-15 13:04:28.130639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.973 [2024-07-15 13:04:28.130780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.973 [2024-07-15 13:04:28.130805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.973 [2024-07-15 13:04:28.130820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.973 [2024-07-15 13:04:28.130833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.973 [2024-07-15 13:04:28.130869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.973 qpair failed and we were unable to recover it. 00:25:09.973 [2024-07-15 13:04:28.140585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.973 [2024-07-15 13:04:28.140672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.973 [2024-07-15 13:04:28.140696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.973 [2024-07-15 13:04:28.140710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.973 [2024-07-15 13:04:28.140744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.973 [2024-07-15 13:04:28.140787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.973 qpair failed and we were unable to recover it. 
00:25:09.973 [2024-07-15 13:04:28.150623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.973 [2024-07-15 13:04:28.150744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.973 [2024-07-15 13:04:28.150769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.973 [2024-07-15 13:04:28.150784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.973 [2024-07-15 13:04:28.150797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.973 [2024-07-15 13:04:28.150828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.973 qpair failed and we were unable to recover it. 00:25:09.973 [2024-07-15 13:04:28.160650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.973 [2024-07-15 13:04:28.160768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.973 [2024-07-15 13:04:28.160793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.973 [2024-07-15 13:04:28.160809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.973 [2024-07-15 13:04:28.160821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.973 [2024-07-15 13:04:28.160851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.973 qpair failed and we were unable to recover it. 00:25:09.973 [2024-07-15 13:04:28.170674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:09.973 [2024-07-15 13:04:28.170825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:09.973 [2024-07-15 13:04:28.170852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:09.973 [2024-07-15 13:04:28.170867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:09.973 [2024-07-15 13:04:28.170880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:09.973 [2024-07-15 13:04:28.170910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:09.973 qpair failed and we were unable to recover it. 
00:25:10.231 [2024-07-15 13:04:28.180760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.180859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.180884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.180898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.180911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.180941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 00:25:10.231 [2024-07-15 13:04:28.190773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.190916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.190943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.190958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.190971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.191001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 00:25:10.231 [2024-07-15 13:04:28.200857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.200956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.200982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.200998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.201010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.201055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 
00:25:10.231 [2024-07-15 13:04:28.210807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.210906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.210931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.210946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.210958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.210989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 00:25:10.231 [2024-07-15 13:04:28.220836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.220926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.220950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.220965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.220983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.221013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 00:25:10.231 [2024-07-15 13:04:28.230880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.230983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.231006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.231021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.231034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.231064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 
00:25:10.231 [2024-07-15 13:04:28.240896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.241017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.241057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.241072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.241084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.241113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 00:25:10.231 [2024-07-15 13:04:28.250929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.251061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.251087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.231 [2024-07-15 13:04:28.251102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.231 [2024-07-15 13:04:28.251114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.231 [2024-07-15 13:04:28.251143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.231 qpair failed and we were unable to recover it. 00:25:10.231 [2024-07-15 13:04:28.260983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.231 [2024-07-15 13:04:28.261099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.231 [2024-07-15 13:04:28.261123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.261138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.261150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.261180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 
00:25:10.232 [2024-07-15 13:04:28.270993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.271104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.271128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.271142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.271154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.271184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.281037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.281131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.281155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.281169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.281182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.281210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.291017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.291136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.291162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.291177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.291189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.291218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 
00:25:10.232 [2024-07-15 13:04:28.301077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.301181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.301206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.301222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.301234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.301263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.311094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.311191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.311215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.311234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.311248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.311276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.321160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.321254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.321278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.321292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.321304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.321333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 
00:25:10.232 [2024-07-15 13:04:28.331143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.331260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.331285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.331300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.331313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.331342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.341165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.341263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.341289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.341304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.341316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.341345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.351291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.351392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.351417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.351432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.351444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.351473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 
00:25:10.232 [2024-07-15 13:04:28.361332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.361441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.361467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.361482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.361494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.361523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.371275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.371368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.371391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.371406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.371418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.371447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 00:25:10.232 [2024-07-15 13:04:28.381290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.381394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.232 [2024-07-15 13:04:28.381419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.232 [2024-07-15 13:04:28.381434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.232 [2024-07-15 13:04:28.381447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.232 [2024-07-15 13:04:28.381475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.232 qpair failed and we were unable to recover it. 
00:25:10.232 [2024-07-15 13:04:28.391311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.232 [2024-07-15 13:04:28.391406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.233 [2024-07-15 13:04:28.391430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.233 [2024-07-15 13:04:28.391445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.233 [2024-07-15 13:04:28.391457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.233 [2024-07-15 13:04:28.391486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.233 qpair failed and we were unable to recover it. 00:25:10.233 [2024-07-15 13:04:28.401350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.233 [2024-07-15 13:04:28.401458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.233 [2024-07-15 13:04:28.401482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.233 [2024-07-15 13:04:28.401501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.233 [2024-07-15 13:04:28.401514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.233 [2024-07-15 13:04:28.401543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.233 qpair failed and we were unable to recover it. 00:25:10.233 [2024-07-15 13:04:28.411360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.233 [2024-07-15 13:04:28.411452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.233 [2024-07-15 13:04:28.411476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.233 [2024-07-15 13:04:28.411491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.233 [2024-07-15 13:04:28.411503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.233 [2024-07-15 13:04:28.411532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.233 qpair failed and we were unable to recover it. 
00:25:10.233 [2024-07-15 13:04:28.421397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.233 [2024-07-15 13:04:28.421488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.233 [2024-07-15 13:04:28.421513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.233 [2024-07-15 13:04:28.421527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.233 [2024-07-15 13:04:28.421540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.233 [2024-07-15 13:04:28.421569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.233 qpair failed and we were unable to recover it. 00:25:10.233 [2024-07-15 13:04:28.431559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.233 [2024-07-15 13:04:28.431689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.233 [2024-07-15 13:04:28.431714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.233 [2024-07-15 13:04:28.431730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.233 [2024-07-15 13:04:28.431766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.233 [2024-07-15 13:04:28.431798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.233 qpair failed and we were unable to recover it. 00:25:10.491 [2024-07-15 13:04:28.441481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.491 [2024-07-15 13:04:28.441599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.491 [2024-07-15 13:04:28.441623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.491 [2024-07-15 13:04:28.441638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.491 [2024-07-15 13:04:28.441651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.491 [2024-07-15 13:04:28.441696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.491 qpair failed and we were unable to recover it. 
00:25:10.491 [2024-07-15 13:04:28.451518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.491 [2024-07-15 13:04:28.451608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.491 [2024-07-15 13:04:28.451632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.491 [2024-07-15 13:04:28.451647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.491 [2024-07-15 13:04:28.451659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.491 [2024-07-15 13:04:28.451688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.491 qpair failed and we were unable to recover it. 00:25:10.491 [2024-07-15 13:04:28.461518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.491 [2024-07-15 13:04:28.461612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.491 [2024-07-15 13:04:28.461635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.491 [2024-07-15 13:04:28.461649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.491 [2024-07-15 13:04:28.461661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.491 [2024-07-15 13:04:28.461690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.491 qpair failed and we were unable to recover it. 00:25:10.491 [2024-07-15 13:04:28.471569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.491 [2024-07-15 13:04:28.471663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.491 [2024-07-15 13:04:28.471687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.491 [2024-07-15 13:04:28.471702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.491 [2024-07-15 13:04:28.471714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.491 [2024-07-15 13:04:28.471767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.491 qpair failed and we were unable to recover it. 
00:25:10.491 [2024-07-15 13:04:28.481600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.481713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.481745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.481763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.481776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.481807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.491609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.491701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.491750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.491767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.491780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.491811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.501655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.501772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.501796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.501812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.501824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.501855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 
00:25:10.492 [2024-07-15 13:04:28.511691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.511821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.511847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.511863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.511875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.511905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.521695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.521823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.521850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.521865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.521878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.521908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.531743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.531841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.531865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.531881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.531893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.531928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 
00:25:10.492 [2024-07-15 13:04:28.541799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.541932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.541958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.541974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.541986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.542016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.551805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.551918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.551945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.551961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.551975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.552005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.561832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.561943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.561970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.561986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.561999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.562044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 
00:25:10.492 [2024-07-15 13:04:28.571842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.571964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.571990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.572006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.572019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.572048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.581979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.582102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.582132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.582148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.582160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.582189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.591937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.592035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.592061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.592091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.592103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.592132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 
00:25:10.492 [2024-07-15 13:04:28.601951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.602051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.602075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.602089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.602101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.602130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.611975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.612082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.612106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.612121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.492 [2024-07-15 13:04:28.612133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.492 [2024-07-15 13:04:28.612162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.492 qpair failed and we were unable to recover it. 00:25:10.492 [2024-07-15 13:04:28.622003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.492 [2024-07-15 13:04:28.622112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.492 [2024-07-15 13:04:28.622136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.492 [2024-07-15 13:04:28.622150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.622168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.622197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 
00:25:10.493 [2024-07-15 13:04:28.632131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.632229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.632253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.632268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.632280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.632309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 00:25:10.493 [2024-07-15 13:04:28.642092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.642190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.642215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.642230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.642243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.642272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 00:25:10.493 [2024-07-15 13:04:28.652155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.652304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.652328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.652343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.652356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.652385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 
00:25:10.493 [2024-07-15 13:04:28.662136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.662223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.662248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.662263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.662275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.662303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 00:25:10.493 [2024-07-15 13:04:28.672224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.672356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.672382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.672397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.672409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.672437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 00:25:10.493 [2024-07-15 13:04:28.682215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.682307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.682332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.682346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.682359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.682387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 
00:25:10.493 [2024-07-15 13:04:28.692222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.493 [2024-07-15 13:04:28.692318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.493 [2024-07-15 13:04:28.692343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.493 [2024-07-15 13:04:28.692358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.493 [2024-07-15 13:04:28.692371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.493 [2024-07-15 13:04:28.692399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.493 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.702230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.702325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.702351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.702366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.702379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.702409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.712290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.712395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.712419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.712439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.712453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.712483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 
00:25:10.751 [2024-07-15 13:04:28.722364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.722458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.722483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.722498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.722511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.722540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.732364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.732469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.732494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.732509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.732522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.732551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.742372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.742461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.742487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.742502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.742514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.742543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 
00:25:10.751 [2024-07-15 13:04:28.752380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.752478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.752503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.752518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.752530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.752559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.762418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.762532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.762557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.762572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.762584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.762613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.772428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.772524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.772549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.772563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.772576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.772605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 
00:25:10.751 [2024-07-15 13:04:28.782458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.782549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.782572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.782586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.782598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.782626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.792555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.792669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.792694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.792708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.792736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.751 [2024-07-15 13:04:28.792776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.751 qpair failed and we were unable to recover it. 00:25:10.751 [2024-07-15 13:04:28.802534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.751 [2024-07-15 13:04:28.802630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.751 [2024-07-15 13:04:28.802656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.751 [2024-07-15 13:04:28.802676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.751 [2024-07-15 13:04:28.802689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.802732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 
00:25:10.752 [2024-07-15 13:04:28.812633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.812758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.812785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.812800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.812813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.812844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.822585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.822676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.822701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.822732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.822755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.822786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.832629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.832759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.832786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.832802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.832814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.832845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 
00:25:10.752 [2024-07-15 13:04:28.842673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.842787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.842813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.842829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.842842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.842872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.852676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.852790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.852815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.852830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.852842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.852873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.862767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.862858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.862885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.862900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.862913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.862942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 
00:25:10.752 [2024-07-15 13:04:28.872769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.872871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.872897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.872913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.872926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.872956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.882772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.882889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.882916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.882932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.882944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.882974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.892826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.892962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.892993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.893010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.893023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.893068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 
00:25:10.752 [2024-07-15 13:04:28.902899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.902996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.903036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.903051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.903064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.903093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.912873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.912979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.913006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.913021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.913034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.913080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.922877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.922971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.922997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.923013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.923041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.923072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 
00:25:10.752 [2024-07-15 13:04:28.932987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.933100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.933126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.933141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.933154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.933189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.943026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.752 [2024-07-15 13:04:28.943155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.752 [2024-07-15 13:04:28.943180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.752 [2024-07-15 13:04:28.943195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.752 [2024-07-15 13:04:28.943207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.752 [2024-07-15 13:04:28.943237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.752 qpair failed and we were unable to recover it. 00:25:10.752 [2024-07-15 13:04:28.953040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:10.753 [2024-07-15 13:04:28.953149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:10.753 [2024-07-15 13:04:28.953175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:10.753 [2024-07-15 13:04:28.953190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:10.753 [2024-07-15 13:04:28.953203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:10.753 [2024-07-15 13:04:28.953233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:10.753 qpair failed and we were unable to recover it. 
00:25:11.013 [2024-07-15 13:04:28.963065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.013 [2024-07-15 13:04:28.963165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.013 [2024-07-15 13:04:28.963191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.013 [2024-07-15 13:04:28.963207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.013 [2024-07-15 13:04:28.963219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:28.963248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:28.973067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:28.973162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:28.973188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:28.973203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:28.973215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:28.973245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:28.983047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:28.983156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:28.983186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:28.983202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:28.983214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:28.983244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 
00:25:11.014 [2024-07-15 13:04:28.993160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:28.993257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:28.993282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:28.993296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:28.993308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:28.993337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.003109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.003213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.003239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.003254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.003266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.003296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.013165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.013262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.013287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.013302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.013314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.013343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 
00:25:11.014 [2024-07-15 13:04:29.023200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.023290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.023315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.023329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.023347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.023378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.033225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.033326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.033352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.033367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.033379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.033408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.043314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.043443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.043468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.043483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.043496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.043524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 
00:25:11.014 [2024-07-15 13:04:29.053342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.053447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.053473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.053488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.053500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.053529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.063306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.063400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.063426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.063441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.063453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.063482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.073378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.073482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.073508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.073524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.073536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.073566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 
00:25:11.014 [2024-07-15 13:04:29.083414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.083548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.083573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.083587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.083600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.083628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.093414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.093520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.093545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.093560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.014 [2024-07-15 13:04:29.093573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.014 [2024-07-15 13:04:29.093602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.014 qpair failed and we were unable to recover it. 00:25:11.014 [2024-07-15 13:04:29.103469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.014 [2024-07-15 13:04:29.103558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.014 [2024-07-15 13:04:29.103584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.014 [2024-07-15 13:04:29.103599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.103612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.015 [2024-07-15 13:04:29.103640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.015 qpair failed and we were unable to recover it. 
00:25:11.015 [2024-07-15 13:04:29.113538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.113638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.113663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.113678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.113696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.015 [2024-07-15 13:04:29.113752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.123495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.123591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.123617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.123632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.123644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.015 [2024-07-15 13:04:29.123673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.133509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.133619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.133645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.133660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.133674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.015 [2024-07-15 13:04:29.133704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.015 qpair failed and we were unable to recover it. 
00:25:11.015 [2024-07-15 13:04:29.143517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.143607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.143631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.143646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.143658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.015 [2024-07-15 13:04:29.143687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.153582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.153684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.153707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.153746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.153762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:11.015 [2024-07-15 13:04:29.153793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.163606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.163759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.163792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.163808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.163820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc8000b90 00:25:11.015 [2024-07-15 13:04:29.163853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:11.015 qpair failed and we were unable to recover it. 
00:25:11.015 [2024-07-15 13:04:29.173682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.173812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.173845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.173862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.173875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.015 [2024-07-15 13:04:29.173911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.183665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.183777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.183805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.183820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.183833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.015 [2024-07-15 13:04:29.183862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.193704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.193830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.193854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.193869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.193882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.015 [2024-07-15 13:04:29.193911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.015 qpair failed and we were unable to recover it. 
00:25:11.015 [2024-07-15 13:04:29.203793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.203899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.203924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.203945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.203958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.015 [2024-07-15 13:04:29.203987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.015 [2024-07-15 13:04:29.213773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.015 [2024-07-15 13:04:29.213872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.015 [2024-07-15 13:04:29.213897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.015 [2024-07-15 13:04:29.213911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.015 [2024-07-15 13:04:29.213923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.015 [2024-07-15 13:04:29.213952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.015 qpair failed and we were unable to recover it. 00:25:11.275 [2024-07-15 13:04:29.223768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.275 [2024-07-15 13:04:29.223900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.275 [2024-07-15 13:04:29.223928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.275 [2024-07-15 13:04:29.223944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.275 [2024-07-15 13:04:29.223956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.275 [2024-07-15 13:04:29.223991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.275 qpair failed and we were unable to recover it. 
00:25:11.275 [2024-07-15 13:04:29.233836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.275 [2024-07-15 13:04:29.233942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.275 [2024-07-15 13:04:29.233968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.275 [2024-07-15 13:04:29.233992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.234004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.234033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.243844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.243940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.243968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.243984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.243997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.244031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.253881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.254034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.254058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.254073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.254084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.254112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 
00:25:11.276 [2024-07-15 13:04:29.263924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.264084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.264111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.264126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.264139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.264168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.273921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.274023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.274049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.274064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.274092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.274121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.283952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.284061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.284085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.284099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.284112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.284141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 
00:25:11.276 [2024-07-15 13:04:29.293998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.294107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.294131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.294151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.294163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.294192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.304073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.304186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.304212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.304227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.304251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.304278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.314082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.314180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.314204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.314218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.314231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.314260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 
00:25:11.276 [2024-07-15 13:04:29.324073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.324176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.324201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.324216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.324228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.324257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.334122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.334216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.334241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.334255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.334268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.334296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.344160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.344250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.344273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.344288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.344300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.344328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 
00:25:11.276 [2024-07-15 13:04:29.354193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.354293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.354317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.354331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.354343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.354371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.364230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.364327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.364350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.364365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.364377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.364405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 00:25:11.276 [2024-07-15 13:04:29.374259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.276 [2024-07-15 13:04:29.374348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.276 [2024-07-15 13:04:29.374371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.276 [2024-07-15 13:04:29.374386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.276 [2024-07-15 13:04:29.374399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.276 [2024-07-15 13:04:29.374427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.276 qpair failed and we were unable to recover it. 
00:25:11.277 [2024-07-15 13:04:29.384283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.384415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.384444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.384460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.384472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.384500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.277 [2024-07-15 13:04:29.394321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.394419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.394442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.394457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.394469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.394497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.277 [2024-07-15 13:04:29.404331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.404431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.404455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.404470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.404482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.404510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 
00:25:11.277 [2024-07-15 13:04:29.414352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.414476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.414499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.414514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.414526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.414554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.277 [2024-07-15 13:04:29.424411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.424526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.424549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.424563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.424576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.424610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.277 [2024-07-15 13:04:29.434419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.434515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.434539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.434553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.434566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.434593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 
00:25:11.277 [2024-07-15 13:04:29.444405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.444513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.444537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.444552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.444564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.444592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.277 [2024-07-15 13:04:29.454460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.454555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.454579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.454594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.454606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.454634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.277 [2024-07-15 13:04:29.464518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.464669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.464693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.464708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.464735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.464775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 
00:25:11.277 [2024-07-15 13:04:29.474513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.277 [2024-07-15 13:04:29.474614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.277 [2024-07-15 13:04:29.474648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.277 [2024-07-15 13:04:29.474663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.277 [2024-07-15 13:04:29.474677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.277 [2024-07-15 13:04:29.474704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.277 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.484526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.484653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.484692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.484707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.484720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.484757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.494602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.494734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.494767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.494783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.494796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.494826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 13:04:29.504583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.504704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.504753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.504770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.504783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.504813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.514595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.514714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.514760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.514775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.514788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.514823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.524591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.524742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.524768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.524784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.524797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.524827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 13:04:29.534705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.534828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.534854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.534868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.534881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.534910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.544690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.544812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.544837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.544852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.544865] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.544894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.554787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.554890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.554915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.554930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.554943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.554972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 13:04:29.564763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.564887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.564917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.564934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.564947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.564975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.574792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.574887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.574912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.574928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.574941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.574971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.584813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.584913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.584939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.584954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.584966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.584995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 
00:25:11.537 [2024-07-15 13:04:29.594862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.594966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.594990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.595005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.595017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.595046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.604845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.604950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.604975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.604990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.605003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.537 [2024-07-15 13:04:29.605051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.537 qpair failed and we were unable to recover it. 00:25:11.537 [2024-07-15 13:04:29.614892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.537 [2024-07-15 13:04:29.615026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.537 [2024-07-15 13:04:29.615066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.537 [2024-07-15 13:04:29.615081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.537 [2024-07-15 13:04:29.615093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.615121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 13:04:29.624980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.625118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.625142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.625157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.625169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.625196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.635082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.635178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.635201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.635215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.635228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.635256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.645088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.645204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.645228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.645243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.645255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.645284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 13:04:29.655065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.655198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.655227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.655242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.655254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.655284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.665009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.665120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.665144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.665158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.665171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.665198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.675058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.675173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.675199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.675214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.675227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.675263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 13:04:29.685027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.685138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.685162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.685176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.685188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.685228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.695096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.695186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.695210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.695224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.695242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.695270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.705144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.705233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.705257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.705272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.705284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.705312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.538 [2024-07-15 13:04:29.715161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.715270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.715293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.715307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.715320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.715348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.725277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.725373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.725397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.725411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.725424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.725452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 00:25:11.538 [2024-07-15 13:04:29.735234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.538 [2024-07-15 13:04:29.735365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.538 [2024-07-15 13:04:29.735390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.538 [2024-07-15 13:04:29.735404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.538 [2024-07-15 13:04:29.735417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.538 [2024-07-15 13:04:29.735446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.538 qpair failed and we were unable to recover it. 
00:25:11.798 [2024-07-15 13:04:29.745280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.798 [2024-07-15 13:04:29.745445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.798 [2024-07-15 13:04:29.745474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.798 [2024-07-15 13:04:29.745490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.798 [2024-07-15 13:04:29.745503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.798 [2024-07-15 13:04:29.745531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.798 qpair failed and we were unable to recover it. 00:25:11.798 [2024-07-15 13:04:29.755259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.798 [2024-07-15 13:04:29.755359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.798 [2024-07-15 13:04:29.755384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.798 [2024-07-15 13:04:29.755399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.798 [2024-07-15 13:04:29.755411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.798 [2024-07-15 13:04:29.755440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.798 qpair failed and we were unable to recover it. 00:25:11.798 [2024-07-15 13:04:29.765297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.798 [2024-07-15 13:04:29.765442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.798 [2024-07-15 13:04:29.765469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.798 [2024-07-15 13:04:29.765484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.798 [2024-07-15 13:04:29.765496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.765524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 
00:25:11.799 [2024-07-15 13:04:29.775303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.775406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.775431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.775446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.775458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.775486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.785335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.785426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.785450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.785465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.785482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.785510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.795350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.795455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.795479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.795493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.795505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.795533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 
00:25:11.799 [2024-07-15 13:04:29.805398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.805501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.805525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.805539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.805552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.805587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.815475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.815572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.815598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.815613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.815625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.815652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.825491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.825611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.825637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.825652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.825664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.825692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 
00:25:11.799 [2024-07-15 13:04:29.835585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.835708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.835760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.835777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.835791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.835820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.845537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.845646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.845670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.845684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.845697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.845761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.855585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.855679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.855704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.855732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.855754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.855784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 
00:25:11.799 [2024-07-15 13:04:29.865593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.865712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.865762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.865781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.865794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.865823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.875630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.875760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.875792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.875812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.875826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.875855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.885655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.885781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.885806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.885820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.885833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.885866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 
00:25:11.799 [2024-07-15 13:04:29.895691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.895865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.895893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.895908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.895921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.895949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.905672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.799 [2024-07-15 13:04:29.905785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.799 [2024-07-15 13:04:29.905810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.799 [2024-07-15 13:04:29.905826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.799 [2024-07-15 13:04:29.905839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.799 [2024-07-15 13:04:29.905867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.799 qpair failed and we were unable to recover it. 00:25:11.799 [2024-07-15 13:04:29.915762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.915866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.915891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.915906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.915919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.915948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 
00:25:11.800 [2024-07-15 13:04:29.925789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.925894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.925918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.925933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.925946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.925979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 00:25:11.800 [2024-07-15 13:04:29.935792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.935918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.935944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.935959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.935972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.936000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 00:25:11.800 [2024-07-15 13:04:29.945817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.945917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.945941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.945956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.945969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.945998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 
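On the target side, the recurring "Unknown controller ID 0x1" message comes from ctrlr.c:_nvmf_ctrlr_add_io_qpair, which validates the cntlid carried in the CONNECT private data of every I/O queue pair against the controllers already created on the subsystem by an earlier admin-queue CONNECT. A heavily simplified sketch of that validation follows; struct spdk_nvmf_fabric_connect_data is the real spec structure, while the subsystem/controller types and the lookup helper are hypothetical stand-ins rather than SPDK's internal ones:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include "spdk/nvmf_spec.h"   /* struct spdk_nvmf_fabric_connect_data */

struct example_subsystem;   /* hypothetical stand-in for the target's subsystem object   */
struct example_controller;  /* hypothetical stand-in for a controller created on CONNECT */

/* Hypothetical lookup over the subsystem's controller table. */
struct example_controller *example_subsystem_get_ctrlr(struct example_subsystem *subsys,
                                                       uint16_t cntlid);

/* An I/O-queue CONNECT must reference a controller ID handed out by a previous
 * admin-queue CONNECT; when the lookup fails, the target rejects the CONNECT,
 * which is what each "Unknown controller ID 0x1" line records. */
static bool
io_connect_targets_known_ctrlr(struct example_subsystem *subsys,
                               const struct spdk_nvmf_fabric_connect_data *data)
{
        return example_subsystem_get_ctrlr(subsys, data->cntlid) != NULL;
}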
00:25:11.800 [2024-07-15 13:04:29.955853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.955952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.955976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.955991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.956003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.956031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 00:25:11.800 [2024-07-15 13:04:29.965892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.965993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.966032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.966052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.966065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.966093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 00:25:11.800 [2024-07-15 13:04:29.975910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.976013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.976052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.976067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.976079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.976107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 
00:25:11.800 [2024-07-15 13:04:29.985962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.986073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.986098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.986128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.986142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.986170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 00:25:11.800 [2024-07-15 13:04:29.995989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:11.800 [2024-07-15 13:04:29.996112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:11.800 [2024-07-15 13:04:29.996136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:11.800 [2024-07-15 13:04:29.996151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:11.800 [2024-07-15 13:04:29.996163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:11.800 [2024-07-15 13:04:29.996190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:11.800 qpair failed and we were unable to recover it. 00:25:12.059 [2024-07-15 13:04:30.006225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.059 [2024-07-15 13:04:30.006428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.059 [2024-07-15 13:04:30.006473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.059 [2024-07-15 13:04:30.006525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.059 [2024-07-15 13:04:30.006556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.059 [2024-07-15 13:04:30.006610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.059 qpair failed and we were unable to recover it. 
00:25:12.059 [2024-07-15 13:04:30.016052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.059 [2024-07-15 13:04:30.016161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.059 [2024-07-15 13:04:30.016187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.059 [2024-07-15 13:04:30.016202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.059 [2024-07-15 13:04:30.016215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.059 [2024-07-15 13:04:30.016244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.059 qpair failed and we were unable to recover it. 00:25:12.059 [2024-07-15 13:04:30.026140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.059 [2024-07-15 13:04:30.026242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.059 [2024-07-15 13:04:30.026270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.059 [2024-07-15 13:04:30.026286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.059 [2024-07-15 13:04:30.026299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.059 [2024-07-15 13:04:30.026329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.036156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.036271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.036297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.036312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.036324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.060 [2024-07-15 13:04:30.036364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.060 qpair failed and we were unable to recover it. 
00:25:12.060 [2024-07-15 13:04:30.046106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.046214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.046240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.046254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.046268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.060 [2024-07-15 13:04:30.046296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.056194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.056303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.056328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.056352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.056365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.060 [2024-07-15 13:04:30.056393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.066186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.066289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.066316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.066332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.066345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1baeea0 00:25:12.060 [2024-07-15 13:04:30.066375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:12.060 qpair failed and we were unable to recover it. 
00:25:12.060 [2024-07-15 13:04:30.076160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.076262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.076296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.076313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.076326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:12.060 [2024-07-15 13:04:30.076358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.086185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.086285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.086311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.086326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.086339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd0000b90 00:25:12.060 [2024-07-15 13:04:30.086369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.096282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.096378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.096410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.096428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.096442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd8000b90 00:25:12.060 [2024-07-15 13:04:30.096474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:12.060 qpair failed and we were unable to recover it. 
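Note that the connect failures are not confined to a single queue: the entries above report different transport qpair handles (0x1baeea0, 0x7f7dd0000b90, 0x7f7dd8000b90) and qpair ids (3, 2, 1, and shortly 4), which matches a host that allocates one I/O queue pair per worker core, as the "Associating TCP ... with lcore 0-3" lines further down also suggest. A minimal sketch of that allocation pattern with the public host API, assuming default queue options (error handling trimmed; not the test application's actual code):

#include "spdk/nvme.h"

/* Allocate one TCP I/O queue pair per worker; each maps to a distinct
 * target-side qpair id like the ids 1-4 seen in the log. */
static int
alloc_io_qpairs(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpairs,
                uint32_t num_workers)
{
        for (uint32_t i = 0; i < num_workers; i++) {
                qpairs[i] = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
                if (qpairs[i] == NULL) {
                        return -1;  /* caller frees any qpairs already allocated */
                }
        }
        return 0;
}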
00:25:12.060 [2024-07-15 13:04:30.106266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.106359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.106384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.106399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.106412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dd8000b90 00:25:12.060 [2024-07-15 13:04:30.106442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.116356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.116458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.116495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.116510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.116523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc8000b90 00:25:12.060 [2024-07-15 13:04:30.116567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.126374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:12.060 [2024-07-15 13:04:30.126478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:12.060 [2024-07-15 13:04:30.126510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:12.060 [2024-07-15 13:04:30.126527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:12.060 [2024-07-15 13:04:30.126540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7dc8000b90 00:25:12.060 [2024-07-15 13:04:30.126570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:12.060 qpair failed and we were unable to recover it. 00:25:12.060 [2024-07-15 13:04:30.126683] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:12.060 A controller has encountered a failure and is being reset. 00:25:12.060 Controller properly reset. 
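The sequence that closes this run — a failed Keep Alive submission, "A controller has encountered a failure and is being reset", then "Controller properly reset" — is the expected outcome of the tc2 case: the host notices the dead connection while polling for completions and recovers by resetting the controller. A minimal sketch of that host-side recovery loop using the public SPDK NVMe API; the control flow is illustrative only and is not the test application's actual source:

#include "spdk/nvme.h"

/* Poll one I/O queue pair; on a transport-level error such as the
 * "CQ transport error -6 (No such device or address)" lines above,
 * reset the controller and reattach the queue pair. */
static void
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

        if (rc < 0) {
                if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                        /* "Controller properly reset." corresponds to this path. */
                        spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
                }
        }
}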
00:25:12.060 Initializing NVMe Controllers 00:25:12.060 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:12.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:12.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:12.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:12.060 Initialization complete. Launching workers. 00:25:12.060 Starting thread on core 1 00:25:12.060 Starting thread on core 2 00:25:12.060 Starting thread on core 3 00:25:12.060 Starting thread on core 0 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:12.060 00:25:12.060 real 0m10.917s 00:25:12.060 user 0m18.597s 00:25:12.060 sys 0m5.536s 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:12.060 ************************************ 00:25:12.060 END TEST nvmf_target_disconnect_tc2 00:25:12.060 ************************************ 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:12.060 rmmod nvme_tcp 00:25:12.060 rmmod nvme_fabrics 00:25:12.060 rmmod nvme_keyring 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3499102 ']' 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3499102 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3499102 ']' 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3499102 00:25:12.060 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:12.061 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.061 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 3499102 00:25:12.318 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:12.318 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:12.318 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3499102' 00:25:12.318 killing process with pid 3499102 00:25:12.318 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3499102 00:25:12.318 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3499102 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.576 13:04:30 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.476 13:04:32 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.476 00:25:14.476 real 0m15.706s 00:25:14.476 user 0m45.158s 00:25:14.476 sys 0m7.408s 00:25:14.476 13:04:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.476 13:04:32 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 ************************************ 00:25:14.476 END TEST nvmf_target_disconnect 00:25:14.476 ************************************ 00:25:14.476 13:04:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:14.476 13:04:32 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:25:14.476 13:04:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.476 13:04:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 13:04:32 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:25:14.476 00:25:14.476 real 19m22.064s 00:25:14.476 user 45m36.217s 00:25:14.476 sys 5m5.389s 00:25:14.476 13:04:32 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.476 13:04:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.476 ************************************ 00:25:14.476 END TEST nvmf_tcp 00:25:14.476 ************************************ 00:25:14.476 13:04:32 -- common/autotest_common.sh@1142 -- # return 0 00:25:14.476 13:04:32 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:14.476 13:04:32 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:14.476 13:04:32 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:14.476 13:04:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.476 13:04:32 -- common/autotest_common.sh@10 -- # set +x 00:25:14.735 ************************************ 00:25:14.735 START TEST spdkcli_nvmf_tcp 00:25:14.735 ************************************ 00:25:14.735 13:04:32 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:14.735 * Looking for test storage... 00:25:14.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.735 13:04:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3500303 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3500303 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3500303 ']' 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.736 13:04:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.736 [2024-07-15 13:04:32.796842] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:25:14.736 [2024-07-15 13:04:32.796935] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500303 ] 00:25:14.736 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.736 [2024-07-15 13:04:32.855060] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:14.994 [2024-07-15 13:04:32.966840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.994 [2024-07-15 13:04:32.966844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:14.994 13:04:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:14.994 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:14.994 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:14.994 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:14.994 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:14.994 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:14.994 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:14.994 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:14.994 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:14.994 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:14.994 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:14.995 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:14.995 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:14.995 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:14.995 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:14.995 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:14.995 ' 00:25:17.527 [2024-07-15 13:04:35.654438] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.906 [2024-07-15 13:04:36.878676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:21.442 [2024-07-15 13:04:39.133608] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:23.347 [2024-07-15 13:04:41.083891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:24.728 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:24.728 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:24.728 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:24.728 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:24.728 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:24.728 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:24.728 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:24.728 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:24.728 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:24.728 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:24.728 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:24.728 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:24.728 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:24.728 13:04:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.986 13:04:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:24.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:24.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:24.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:24.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:24.986 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:24.986 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:24.986 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:24.986 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:24.986 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:24.986 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:24.986 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:24.986 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:24.986 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:24.986 ' 00:25:30.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:30.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:30.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:30.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:30.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:30.255 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:30.255 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:30.255 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:30.255 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:30.255 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:30.255 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:30.255 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:30.255 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:30.255 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3500303 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3500303 ']' 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3500303 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3500303 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3500303' 00:25:30.255 killing process with pid 3500303 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3500303 00:25:30.255 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3500303 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3500303 ']' 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3500303 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3500303 ']' 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3500303 00:25:30.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3500303) - No such process 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3500303 is not found' 00:25:30.515 Process with pid 3500303 is not found 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:30.515 00:25:30.515 real 0m16.015s 00:25:30.515 user 0m33.881s 00:25:30.515 sys 0m0.730s 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:30.515 13:04:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.515 ************************************ 00:25:30.515 END TEST spdkcli_nvmf_tcp 00:25:30.515 ************************************ 00:25:30.774 13:04:48 -- common/autotest_common.sh@1142 -- # return 0 00:25:30.774 13:04:48 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:30.774 13:04:48 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:30.774 13:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.774 13:04:48 -- common/autotest_common.sh@10 -- # set +x 00:25:30.774 ************************************ 00:25:30.774 START TEST nvmf_identify_passthru 00:25:30.774 ************************************ 00:25:30.774 13:04:48 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:30.774 * Looking for test storage... 00:25:30.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:30.774 13:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.774 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.774 13:04:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.774 13:04:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.774 13:04:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.774 13:04:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:30.775 13:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.775 13:04:48 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.775 13:04:48 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.775 13:04:48 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:30.775 13:04:48 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.775 13:04:48 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.775 13:04:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:30.775 13:04:48 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:30.775 13:04:48 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:30.775 13:04:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.309 13:04:50 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:33.309 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:33.309 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:33.309 Found net devices under 0000:84:00.0: cvl_0_0 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:33.309 Found net devices under 0000:84:00.1: cvl_0_1 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
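The trace above is nvmf/common.sh enumerating supported NICs: it matches the Intel E810 device ID 0x159b on 0000:84:00.0 and 0000:84:00.1 and resolves each PCI function to its kernel net devices (cvl_0_0, cvl_0_1) through sysfs. A minimal stand-alone sketch of that lookup, assuming only the vendor/device IDs and the sysfs layout shown in the log:

  # Walk PCI devices and print the net interfaces behind each Intel E810 (0x159b) port,
  # mirroring the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in the trace.
  for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor")" = "0x8086" ] || continue   # Intel
      [ "$(cat "$pci/device")" = "0x159b" ] || continue   # E810 / ice
      for net in "$pci"/net/*; do
          [ -e "$net" ] && echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done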
00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.309 13:04:50 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.309 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:25:33.310 00:25:33.310 --- 10.0.0.2 ping statistics --- 00:25:33.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.310 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:25:33.310 00:25:33.310 --- 10.0.0.1 ping statistics --- 00:25:33.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.310 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.310 13:04:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:25:33.310 13:04:51 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:33.310 13:04:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:33.310 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.582 
13:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:25:37.582 13:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:25:37.582 13:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:37.582 13:04:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:37.582 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3504947 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3504947 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3504947 ']' 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.772 [2024-07-15 13:04:59.705216] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:25:41.772 [2024-07-15 13:04:59.705312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.772 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.772 [2024-07-15 13:04:59.768958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.772 [2024-07-15 13:04:59.876931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.772 [2024-07-15 13:04:59.876990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
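At this point identify_passthru.sh has read the local controller's serial and model number over PCIe and is bringing up an NVMe-oF target inside the cvl_0_0_ns_spdk namespace created earlier, paused with --wait-for-rpc until waitforlisten sees the RPC socket. A rough stand-alone equivalent of that launch step, assuming the default /var/tmp/spdk.sock socket and the repository paths shown in the log:

  # Start nvmf_tgt in the test namespace, halted at --wait-for-rpc, then poll the RPC socket.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"

The records that follow configure the paused target over that socket (nvmf_set_config --passthru-identify-ctrlr, framework_start_init, nvmf_create_transport) before any subsystem is created.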
00:25:41.772 [2024-07-15 13:04:59.877020] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:41.772 [2024-07-15 13:04:59.877032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:41.772 [2024-07-15 13:04:59.877042] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:41.772 [2024-07-15 13:04:59.877169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.772 [2024-07-15 13:04:59.877235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.772 [2024-07-15 13:04:59.877302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.772 [2024-07-15 13:04:59.877304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.772 INFO: Log level set to 20 00:25:41.772 INFO: Requests: 00:25:41.772 { 00:25:41.772 "jsonrpc": "2.0", 00:25:41.772 "method": "nvmf_set_config", 00:25:41.772 "id": 1, 00:25:41.772 "params": { 00:25:41.772 "admin_cmd_passthru": { 00:25:41.772 "identify_ctrlr": true 00:25:41.772 } 00:25:41.772 } 00:25:41.772 } 00:25:41.772 00:25:41.772 INFO: response: 00:25:41.772 { 00:25:41.772 "jsonrpc": "2.0", 00:25:41.772 "id": 1, 00:25:41.772 "result": true 00:25:41.772 } 00:25:41.772 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.772 13:04:59 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.772 13:04:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:41.772 INFO: Setting log level to 20 00:25:41.772 INFO: Setting log level to 20 00:25:41.772 INFO: Log level set to 20 00:25:41.772 INFO: Log level set to 20 00:25:41.772 INFO: Requests: 00:25:41.772 { 00:25:41.772 "jsonrpc": "2.0", 00:25:41.772 "method": "framework_start_init", 00:25:41.772 "id": 1 00:25:41.772 } 00:25:41.772 00:25:41.772 INFO: Requests: 00:25:41.772 { 00:25:41.772 "jsonrpc": "2.0", 00:25:41.772 "method": "framework_start_init", 00:25:41.772 "id": 1 00:25:41.772 } 00:25:41.772 00:25:42.032 [2024-07-15 13:05:00.030973] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:42.032 INFO: response: 00:25:42.032 { 00:25:42.032 "jsonrpc": "2.0", 00:25:42.032 "id": 1, 00:25:42.032 "result": true 00:25:42.032 } 00:25:42.032 00:25:42.032 INFO: response: 00:25:42.032 { 00:25:42.032 "jsonrpc": "2.0", 00:25:42.032 "id": 1, 00:25:42.032 "result": true 00:25:42.032 } 00:25:42.032 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.032 13:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.032 13:05:00 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.032 INFO: Setting log level to 40 00:25:42.032 INFO: Setting log level to 40 00:25:42.032 INFO: Setting log level to 40 00:25:42.032 [2024-07-15 13:05:00.040971] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.032 13:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:42.032 13:05:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.032 13:05:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:45.324 Nvme0n1 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:45.324 [2024-07-15 13:05:02.930229] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:45.324 [ 00:25:45.324 { 00:25:45.324 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.324 "subtype": "Discovery", 00:25:45.324 "listen_addresses": [], 00:25:45.324 "allow_any_host": true, 00:25:45.324 "hosts": [] 00:25:45.324 }, 00:25:45.324 { 00:25:45.324 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.324 "subtype": "NVMe", 00:25:45.324 "listen_addresses": [ 00:25:45.324 { 00:25:45.324 "trtype": "TCP", 00:25:45.324 "adrfam": "IPv4", 00:25:45.324 "traddr": "10.0.0.2", 00:25:45.324 "trsvcid": "4420" 00:25:45.324 } 00:25:45.324 ], 00:25:45.324 "allow_any_host": true, 00:25:45.324 "hosts": [], 00:25:45.324 "serial_number": 
"SPDK00000000000001", 00:25:45.324 "model_number": "SPDK bdev Controller", 00:25:45.324 "max_namespaces": 1, 00:25:45.324 "min_cntlid": 1, 00:25:45.324 "max_cntlid": 65519, 00:25:45.324 "namespaces": [ 00:25:45.324 { 00:25:45.324 "nsid": 1, 00:25:45.324 "bdev_name": "Nvme0n1", 00:25:45.324 "name": "Nvme0n1", 00:25:45.324 "nguid": "BBB8E901134D49A08232F39359B699BF", 00:25:45.324 "uuid": "bbb8e901-134d-49a0-8232-f39359b699bf" 00:25:45.324 } 00:25:45.324 ] 00:25:45.324 } 00:25:45.324 ] 00:25:45.324 13:05:02 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:45.324 13:05:02 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:45.324 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:45.324 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:45.324 13:05:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:45.324 rmmod nvme_tcp 00:25:45.324 rmmod nvme_fabrics 00:25:45.324 rmmod nvme_keyring 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:45.324 13:05:03 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3504947 ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3504947 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3504947 ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3504947 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3504947 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:45.324 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3504947' 00:25:45.324 killing process with pid 3504947 00:25:45.325 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3504947 00:25:45.325 13:05:03 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3504947 00:25:47.231 13:05:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:47.231 13:05:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:47.231 13:05:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:47.231 13:05:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:47.231 13:05:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:47.231 13:05:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.231 13:05:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:47.231 13:05:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.140 13:05:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:49.140 00:25:49.140 real 0m18.318s 00:25:49.140 user 0m27.008s 00:25:49.140 sys 0m2.415s 00:25:49.140 13:05:07 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:49.140 13:05:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:49.140 ************************************ 00:25:49.140 END TEST nvmf_identify_passthru 00:25:49.140 ************************************ 00:25:49.140 13:05:07 -- common/autotest_common.sh@1142 -- # return 0 00:25:49.140 13:05:07 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:49.140 13:05:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:49.140 13:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.140 13:05:07 -- common/autotest_common.sh@10 -- # set +x 00:25:49.140 ************************************ 00:25:49.140 START TEST nvmf_dif 00:25:49.140 ************************************ 00:25:49.140 13:05:07 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:49.140 * Looking for test storage... 
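The identify_passthru run that finishes above boils down to one check: the serial and model number read locally over PCIe must match what the target reports through the TCP subsystem once passthru identify is enabled. A condensed sketch of that comparison, using the same spdk_nvme_identify invocations and awk fields that appear in the trace:

  # Compare the local (PCIe) serial number with the one seen through the NVMe-oF/TCP subsystem.
  pcie_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' | awk '/Serial Number:/ {print $3}')
  tcp_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | awk '/Serial Number:/ {print $3}')
  [ "$pcie_sn" = "$tcp_sn" ] || { echo "passthru identify mismatch: $pcie_sn vs $tcp_sn"; exit 1; }

In the run above both reads return BTLJ9142051K1P0FGN (and INTEL for the model number), so the test passes, and the nvmf_dif suite starting here reuses the same test bed.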
00:25:49.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:49.140 13:05:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:49.140 13:05:07 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:49.140 13:05:07 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:49.140 13:05:07 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:49.140 13:05:07 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:49.140 13:05:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.140 13:05:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.140 13:05:07 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.140 13:05:07 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:49.141 13:05:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:49.141 13:05:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:49.141 13:05:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:49.141 13:05:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:49.141 13:05:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:49.141 13:05:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:49.141 13:05:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:49.141 13:05:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:49.141 13:05:07 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:49.141 13:05:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:51.048 13:05:09 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:51.049 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:51.049 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
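Just before this second bring-up, dif.sh recorded its defaults: NULL_META=16, NULL_BLOCK_SIZE=512, NULL_SIZE=64 and NULL_DIF=1, and once nvmftestinit completes it appends --dif-insert-or-strip to the TCP transport options (visible further down in the trace). Those values describe the small null bdevs the DIF test works against: 512-byte blocks, 16 bytes of per-block metadata and DIF type 1 protection. A minimal sketch of an equivalent configuration against a running target; the bdev_null_create option names here are assumptions, not copied from the trace:

  # TCP transport with the options dif.sh assembles (-t tcp -o --dif-insert-or-strip),
  # plus a null bdev shaped like the dif.sh defaults above.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create Null0 64 512 --md-size 16 --dif-type 1   # flag names assumed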
00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:51.049 Found net devices under 0000:84:00.0: cvl_0_0 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:51.049 Found net devices under 0000:84:00.1: cvl_0_1 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:51.049 13:05:09 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:51.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:51.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:25:51.049 00:25:51.049 --- 10.0.0.2 ping statistics --- 00:25:51.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.049 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:51.049 13:05:09 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:51.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:51.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:25:51.049 00:25:51.049 --- 10.0.0.1 ping statistics --- 00:25:51.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:51.050 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:25:51.050 13:05:09 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:51.050 13:05:09 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:51.050 13:05:09 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:51.050 13:05:09 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:52.428 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:52.428 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:52.428 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:52.428 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:52.428 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:52.428 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:52.428 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:52.428 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:52.428 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:52.428 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:52.428 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:52.428 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:52.428 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:52.428 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:52.428 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:52.428 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:52.428 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:52.428 13:05:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:52.428 13:05:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3508115 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:52.428 13:05:10 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3508115 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3508115 ']' 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.428 13:05:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:52.687 [2024-07-15 13:05:10.652628] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:25:52.687 [2024-07-15 13:05:10.652696] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.687 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.687 [2024-07-15 13:05:10.716510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.687 [2024-07-15 13:05:10.824908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.687 [2024-07-15 13:05:10.824960] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.687 [2024-07-15 13:05:10.824976] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.687 [2024-07-15 13:05:10.824988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.687 [2024-07-15 13:05:10.824999] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
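At this point nvmf_tcp_init has split the two ports into a point-to-point pair: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2/24), cvl_0_1 stayed in the root namespace as the initiator interface (10.0.0.1/24), TCP port 4420 was opened in iptables, reachability was confirmed with a ping in each direction, and nvmf_tgt was then started inside the namespace. A condensed replay of those steps, using the interface names, addresses, and port from this run:

# Sketch of the namespace plumbing traced above (run as root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# The target itself is then launched inside the namespace:
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF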
00:25:52.687 [2024-07-15 13:05:10.825056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:52.944 13:05:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 13:05:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.944 13:05:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:52.944 13:05:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 [2024-07-15 13:05:10.968001] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.944 13:05:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.944 13:05:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 ************************************ 00:25:52.944 START TEST fio_dif_1_default 00:25:52.944 ************************************ 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 bdev_null0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:52.944 [2024-07-15 13:05:11.032317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:52.944 { 00:25:52.944 "params": { 00:25:52.944 "name": "Nvme$subsystem", 00:25:52.944 "trtype": "$TEST_TRANSPORT", 00:25:52.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.944 "adrfam": "ipv4", 00:25:52.944 "trsvcid": "$NVMF_PORT", 00:25:52.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.944 "hdgst": ${hdgst:-false}, 00:25:52.944 "ddgst": ${ddgst:-false} 00:25:52.944 }, 00:25:52.944 "method": "bdev_nvme_attach_controller" 00:25:52.944 } 00:25:52.944 EOF 00:25:52.944 )") 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:52.944 "params": { 00:25:52.944 "name": "Nvme0", 00:25:52.944 "trtype": "tcp", 00:25:52.944 "traddr": "10.0.0.2", 00:25:52.944 "adrfam": "ipv4", 00:25:52.944 "trsvcid": "4420", 00:25:52.944 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:52.944 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:52.944 "hdgst": false, 00:25:52.944 "ddgst": false 00:25:52.944 }, 00:25:52.944 "method": "bdev_nvme_attach_controller" 00:25:52.944 }' 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:52.944 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:52.945 13:05:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:53.201 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:53.201 fio-3.35 00:25:53.201 Starting 1 thread 00:25:53.201 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.394 00:26:05.394 filename0: (groupid=0, jobs=1): err= 0: pid=3508347: Mon Jul 15 13:05:22 2024 00:26:05.394 read: IOPS=181, BW=727KiB/s (745kB/s)(7296KiB/10031msec) 00:26:05.394 slat (nsec): min=4930, max=36588, avg=9198.39, stdev=2590.48 00:26:05.394 clat (usec): min=514, max=47177, avg=21967.44, stdev=20431.81 00:26:05.394 lat (usec): min=522, max=47196, avg=21976.64, stdev=20431.73 00:26:05.394 clat percentiles (usec): 00:26:05.394 | 1.00th=[ 553], 5.00th=[ 570], 10.00th=[ 603], 20.00th=[ 627], 00:26:05.394 | 30.00th=[ 635], 40.00th=[ 660], 50.00th=[41157], 60.00th=[41157], 00:26:05.394 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:26:05.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:26:05.394 | 99.99th=[46924] 00:26:05.394 bw ( KiB/s): min= 512, max= 768, per=100.00%, avg=728.00, stdev=78.30, samples=20 00:26:05.394 iops : min= 128, max= 192, 
avg=182.00, stdev=19.57, samples=20 00:26:05.394 lat (usec) : 750=47.75%, 1000=0.05% 00:26:05.394 lat (msec) : 50=52.19% 00:26:05.395 cpu : usr=89.48%, sys=10.23%, ctx=12, majf=0, minf=219 00:26:05.395 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:05.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:05.395 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:05.395 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:05.395 00:26:05.395 Run status group 0 (all jobs): 00:26:05.395 READ: bw=727KiB/s (745kB/s), 727KiB/s-727KiB/s (745kB/s-745kB/s), io=7296KiB (7471kB), run=10031-10031msec 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 00:26:05.395 real 0m11.348s 00:26:05.395 user 0m10.243s 00:26:05.395 sys 0m1.306s 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 ************************************ 00:26:05.395 END TEST fio_dif_1_default 00:26:05.395 ************************************ 00:26:05.395 13:05:22 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:05.395 13:05:22 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:05.395 13:05:22 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:05.395 13:05:22 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 ************************************ 00:26:05.395 START TEST fio_dif_1_multi_subsystems 00:26:05.395 ************************************ 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:05.395 13:05:22 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 bdev_null0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 [2024-07-15 13:05:22.435618] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 bdev_null1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.395 { 00:26:05.395 "params": { 00:26:05.395 "name": "Nvme$subsystem", 00:26:05.395 "trtype": "$TEST_TRANSPORT", 
00:26:05.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.395 "adrfam": "ipv4", 00:26:05.395 "trsvcid": "$NVMF_PORT", 00:26:05.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.395 "hdgst": ${hdgst:-false}, 00:26:05.395 "ddgst": ${ddgst:-false} 00:26:05.395 }, 00:26:05.395 "method": "bdev_nvme_attach_controller" 00:26:05.395 } 00:26:05.395 EOF 00:26:05.395 )") 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:05.395 { 00:26:05.395 "params": { 00:26:05.395 "name": "Nvme$subsystem", 00:26:05.395 "trtype": "$TEST_TRANSPORT", 00:26:05.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:05.395 "adrfam": "ipv4", 00:26:05.395 "trsvcid": "$NVMF_PORT", 00:26:05.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:05.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:05.395 "hdgst": ${hdgst:-false}, 00:26:05.395 "ddgst": ${ddgst:-false} 00:26:05.395 }, 00:26:05.395 "method": "bdev_nvme_attach_controller" 00:26:05.395 } 00:26:05.395 EOF 00:26:05.395 )") 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:05.395 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:05.396 "params": { 00:26:05.396 "name": "Nvme0", 00:26:05.396 "trtype": "tcp", 00:26:05.396 "traddr": "10.0.0.2", 00:26:05.396 "adrfam": "ipv4", 00:26:05.396 "trsvcid": "4420", 00:26:05.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:05.396 "hdgst": false, 00:26:05.396 "ddgst": false 00:26:05.396 }, 00:26:05.396 "method": "bdev_nvme_attach_controller" 00:26:05.396 },{ 00:26:05.396 "params": { 00:26:05.396 "name": "Nvme1", 00:26:05.396 "trtype": "tcp", 00:26:05.396 "traddr": "10.0.0.2", 00:26:05.396 "adrfam": "ipv4", 00:26:05.396 "trsvcid": "4420", 00:26:05.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:05.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:05.396 "hdgst": false, 00:26:05.396 "ddgst": false 00:26:05.396 }, 00:26:05.396 "method": "bdev_nvme_attach_controller" 00:26:05.396 }' 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:05.396 13:05:22 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:05.396 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:05.396 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:05.396 fio-3.35 00:26:05.396 Starting 2 threads 00:26:05.396 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.589 00:26:17.589 filename0: (groupid=0, jobs=1): err= 0: pid=3509865: Mon Jul 15 13:05:33 2024 00:26:17.589 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10011msec) 00:26:17.589 slat (nsec): min=7789, max=90443, avg=10281.43, stdev=4379.07 00:26:17.589 clat (usec): min=868, max=47192, avg=41676.98, stdev=2698.30 00:26:17.589 lat (usec): min=876, max=47241, avg=41687.26, stdev=2698.44 00:26:17.589 clat percentiles (usec): 00:26:17.589 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:26:17.589 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:26:17.589 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:17.589 | 99.00th=[43254], 99.50th=[43254], 99.90th=[47449], 99.95th=[47449], 00:26:17.589 | 99.99th=[47449] 
00:26:17.589 bw ( KiB/s): min= 352, max= 416, per=49.90%, avg=382.40, stdev=12.61, samples=20 00:26:17.589 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:26:17.589 lat (usec) : 1000=0.42% 00:26:17.589 lat (msec) : 50=99.58% 00:26:17.589 cpu : usr=93.66%, sys=6.05%, ctx=16, majf=0, minf=119 00:26:17.589 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.589 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.589 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:17.589 filename1: (groupid=0, jobs=1): err= 0: pid=3509866: Mon Jul 15 13:05:33 2024 00:26:17.589 read: IOPS=95, BW=382KiB/s (391kB/s)(3824KiB/10011msec) 00:26:17.589 slat (nsec): min=7554, max=82269, avg=11136.98, stdev=4612.44 00:26:17.589 clat (usec): min=40916, max=47135, avg=41848.23, stdev=482.75 00:26:17.589 lat (usec): min=40924, max=47148, avg=41859.37, stdev=482.86 00:26:17.589 clat percentiles (usec): 00:26:17.589 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:26:17.589 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:26:17.589 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:17.589 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:26:17.589 | 99.99th=[46924] 00:26:17.589 bw ( KiB/s): min= 352, max= 384, per=49.64%, avg=380.80, stdev= 9.85, samples=20 00:26:17.589 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:26:17.589 lat (msec) : 50=100.00% 00:26:17.589 cpu : usr=94.05%, sys=5.62%, ctx=22, majf=0, minf=185 00:26:17.589 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.589 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.589 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:17.589 00:26:17.589 Run status group 0 (all jobs): 00:26:17.589 READ: bw=766KiB/s (784kB/s), 382KiB/s-384KiB/s (391kB/s-393kB/s), io=7664KiB (7848kB), run=10011-10011msec 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.589 
13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:17.589 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 00:26:17.590 real 0m11.523s 00:26:17.590 user 0m20.248s 00:26:17.590 sys 0m1.445s 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 ************************************ 00:26:17.590 END TEST fio_dif_1_multi_subsystems 00:26:17.590 ************************************ 00:26:17.590 13:05:33 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:17.590 13:05:33 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:17.590 13:05:33 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:17.590 13:05:33 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 ************************************ 00:26:17.590 START TEST fio_dif_rand_params 00:26:17.590 ************************************ 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 
0 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 bdev_null0 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.590 [2024-07-15 13:05:33.995797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:17.590 13:05:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.590 { 00:26:17.590 "params": { 00:26:17.590 "name": "Nvme$subsystem", 00:26:17.590 "trtype": "$TEST_TRANSPORT", 00:26:17.590 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.590 "adrfam": "ipv4", 00:26:17.590 "trsvcid": "$NVMF_PORT", 00:26:17.590 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.590 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.590 "hdgst": ${hdgst:-false}, 00:26:17.590 "ddgst": ${ddgst:-false} 00:26:17.590 }, 00:26:17.590 "method": "bdev_nvme_attach_controller" 00:26:17.590 } 00:26:17.590 EOF 00:26:17.590 )") 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
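The fio_bdev/fio_plugin wrappers seen above run stock fio with SPDK's bdev fio plugin preloaded; the generated NVMe-oF attach config and the generated job file are handed to fio as /dev/fd/62 and /dev/fd/61. Stripped of the harness, the invocation for this run has roughly the following shape (gen_nvmf_target_json and gen_fio_conf are the shell helpers visible in the trace, so this sketch assumes nvmf/common.sh and target/dif.sh are sourced; paths are the ones used in this workspace):

# Sketch of the traced invocation, with process substitution standing in
# for the harness-managed /dev/fd/62 (JSON config) and /dev/fd/61 (job file).
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)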
00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.590 "params": { 00:26:17.590 "name": "Nvme0", 00:26:17.590 "trtype": "tcp", 00:26:17.590 "traddr": "10.0.0.2", 00:26:17.590 "adrfam": "ipv4", 00:26:17.590 "trsvcid": "4420", 00:26:17.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.590 "hdgst": false, 00:26:17.590 "ddgst": false 00:26:17.590 }, 00:26:17.590 "method": "bdev_nvme_attach_controller" 00:26:17.590 }' 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.590 13:05:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.591 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:17.591 ... 
00:26:17.591 fio-3.35 00:26:17.591 Starting 3 threads 00:26:17.591 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.820 00:26:21.820 filename0: (groupid=0, jobs=1): err= 0: pid=3511270: Mon Jul 15 13:05:39 2024 00:26:21.820 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(169MiB/5005msec) 00:26:21.820 slat (usec): min=8, max=116, avg=16.84, stdev= 6.11 00:26:21.820 clat (usec): min=3798, max=55668, avg=11059.12, stdev=4074.90 00:26:21.820 lat (usec): min=3811, max=55684, avg=11075.96, stdev=4075.02 00:26:21.820 clat percentiles (usec): 00:26:21.820 | 1.00th=[ 4686], 5.00th=[ 6980], 10.00th=[ 7701], 20.00th=[ 8848], 00:26:21.820 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:26:21.820 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13566], 95.00th=[14222], 00:26:21.820 | 99.00th=[15926], 99.50th=[51643], 99.90th=[55313], 99.95th=[55837], 00:26:21.820 | 99.99th=[55837] 00:26:21.820 bw ( KiB/s): min=28672, max=39759, per=36.04%, avg=34619.10, stdev=3090.79, samples=10 00:26:21.820 iops : min= 224, max= 310, avg=270.40, stdev=24.03, samples=10 00:26:21.820 lat (msec) : 4=0.07%, 10=31.14%, 20=68.12%, 50=0.15%, 100=0.52% 00:26:21.820 cpu : usr=86.07%, sys=10.37%, ctx=126, majf=0, minf=92 00:26:21.820 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.820 issued rwts: total=1355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.820 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.820 filename0: (groupid=0, jobs=1): err= 0: pid=3511271: Mon Jul 15 13:05:39 2024 00:26:21.820 read: IOPS=251, BW=31.4MiB/s (32.9MB/s)(157MiB/5005msec) 00:26:21.820 slat (nsec): min=6570, max=41755, avg=14722.96, stdev=3272.58 00:26:21.820 clat (usec): min=4694, max=57855, avg=11925.17, stdev=3787.17 00:26:21.820 lat (usec): min=4708, max=57871, avg=11939.89, stdev=3787.31 00:26:21.820 clat percentiles (usec): 00:26:21.820 | 1.00th=[ 6521], 5.00th=[ 7570], 10.00th=[ 8029], 20.00th=[ 9503], 00:26:21.820 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:26:21.820 | 70.00th=[13173], 80.00th=[13829], 90.00th=[14615], 95.00th=[15139], 00:26:21.820 | 99.00th=[17171], 99.50th=[18744], 99.90th=[57934], 99.95th=[57934], 00:26:21.820 | 99.99th=[57934] 00:26:21.820 bw ( KiB/s): min=28416, max=37632, per=33.42%, avg=32102.40, stdev=2549.17, samples=10 00:26:21.820 iops : min= 222, max= 294, avg=250.80, stdev=19.92, samples=10 00:26:21.820 lat (msec) : 10=24.11%, 20=75.42%, 50=0.08%, 100=0.40% 00:26:21.820 cpu : usr=91.39%, sys=7.45%, ctx=318, majf=0, minf=90 00:26:21.820 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.820 issued rwts: total=1257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.820 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.820 filename0: (groupid=0, jobs=1): err= 0: pid=3511272: Mon Jul 15 13:05:39 2024 00:26:21.820 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(147MiB/5047msec) 00:26:21.820 slat (nsec): min=7354, max=53310, avg=14454.90, stdev=3504.50 00:26:21.820 clat (usec): min=7156, max=56334, avg=12828.86, stdev=8775.51 00:26:21.820 lat (usec): min=7169, max=56348, avg=12843.31, stdev=8775.28 00:26:21.820 clat percentiles (usec): 
00:26:21.820 | 1.00th=[ 7963], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:26:21.820 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10945], 60.00th=[11207], 00:26:21.820 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13042], 95.00th=[15795], 00:26:21.820 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55837], 99.95th=[56361], 00:26:21.820 | 99.99th=[56361] 00:26:21.820 bw ( KiB/s): min=21504, max=35840, per=31.26%, avg=30028.80, stdev=4447.25, samples=10 00:26:21.820 iops : min= 168, max= 280, avg=234.60, stdev=34.74, samples=10 00:26:21.820 lat (msec) : 10=20.43%, 20=74.81%, 50=1.02%, 100=3.74% 00:26:21.820 cpu : usr=92.19%, sys=7.33%, ctx=13, majf=0, minf=129 00:26:21.820 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.820 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.820 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.820 issued rwts: total=1175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.820 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:21.820 00:26:21.820 Run status group 0 (all jobs): 00:26:21.820 READ: bw=93.8MiB/s (98.3MB/s), 29.1MiB/s-33.8MiB/s (30.5MB/s-35.5MB/s), io=473MiB (496MB), run=5005-5047msec 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 bdev_null0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 [2024-07-15 13:05:40.266156] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 bdev_null1 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.109 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.374 bdev_null2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:22.374 { 00:26:22.374 "params": { 00:26:22.374 "name": "Nvme$subsystem", 00:26:22.374 "trtype": "$TEST_TRANSPORT", 00:26:22.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.374 "adrfam": "ipv4", 00:26:22.374 "trsvcid": "$NVMF_PORT", 00:26:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.374 "hdgst": ${hdgst:-false}, 00:26:22.374 "ddgst": ${ddgst:-false} 00:26:22.374 }, 00:26:22.374 "method": "bdev_nvme_attach_controller" 00:26:22.374 } 00:26:22.374 EOF 00:26:22.374 )") 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.374 { 00:26:22.374 "params": { 00:26:22.374 "name": "Nvme$subsystem", 00:26:22.374 "trtype": "$TEST_TRANSPORT", 00:26:22.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.374 "adrfam": "ipv4", 00:26:22.374 "trsvcid": "$NVMF_PORT", 00:26:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.374 "hdgst": ${hdgst:-false}, 00:26:22.374 "ddgst": ${ddgst:-false} 00:26:22.374 }, 00:26:22.374 "method": "bdev_nvme_attach_controller" 00:26:22.374 } 00:26:22.374 EOF 00:26:22.374 )") 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:22.374 { 00:26:22.374 "params": { 00:26:22.374 "name": "Nvme$subsystem", 00:26:22.374 "trtype": "$TEST_TRANSPORT", 00:26:22.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:22.374 "adrfam": "ipv4", 00:26:22.374 "trsvcid": "$NVMF_PORT", 00:26:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:22.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:22.374 "hdgst": ${hdgst:-false}, 00:26:22.374 "ddgst": ${ddgst:-false} 00:26:22.374 }, 00:26:22.374 "method": "bdev_nvme_attach_controller" 00:26:22.374 } 00:26:22.374 EOF 00:26:22.374 )") 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:22.374 "params": { 00:26:22.374 "name": "Nvme0", 00:26:22.374 "trtype": "tcp", 00:26:22.374 "traddr": "10.0.0.2", 00:26:22.374 "adrfam": "ipv4", 00:26:22.374 "trsvcid": "4420", 00:26:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:22.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:22.374 "hdgst": false, 00:26:22.374 "ddgst": false 00:26:22.374 }, 00:26:22.374 "method": "bdev_nvme_attach_controller" 00:26:22.374 },{ 00:26:22.374 "params": { 00:26:22.374 "name": "Nvme1", 00:26:22.374 "trtype": "tcp", 00:26:22.374 "traddr": "10.0.0.2", 00:26:22.374 "adrfam": "ipv4", 00:26:22.374 "trsvcid": "4420", 00:26:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:22.374 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:22.374 "hdgst": false, 00:26:22.374 "ddgst": false 00:26:22.374 }, 00:26:22.374 "method": "bdev_nvme_attach_controller" 00:26:22.374 },{ 00:26:22.374 "params": { 00:26:22.374 "name": "Nvme2", 00:26:22.374 "trtype": "tcp", 00:26:22.374 "traddr": "10.0.0.2", 00:26:22.374 "adrfam": "ipv4", 00:26:22.374 "trsvcid": "4420", 00:26:22.374 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:22.374 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:22.374 "hdgst": false, 00:26:22.374 "ddgst": false 00:26:22.374 }, 00:26:22.374 "method": "bdev_nvme_attach_controller" 00:26:22.374 }' 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:22.374 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:22.375 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:22.375 13:05:40 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:26:22.375 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:22.375 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:22.375 13:05:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:22.633 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:22.633 ... 00:26:22.633 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:22.633 ... 00:26:22.633 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:22.633 ... 00:26:22.633 fio-3.35 00:26:22.633 Starting 24 threads 00:26:22.633 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.854 00:26:34.854 filename0: (groupid=0, jobs=1): err= 0: pid=3512133: Mon Jul 15 13:05:51 2024 00:26:34.854 read: IOPS=67, BW=270KiB/s (277kB/s)(2752KiB/10175msec) 00:26:34.854 slat (usec): min=12, max=105, avg=59.84, stdev=21.96 00:26:34.854 clat (msec): min=127, max=395, avg=235.37, stdev=39.07 00:26:34.854 lat (msec): min=127, max=395, avg=235.43, stdev=39.08 00:26:34.854 clat percentiles (msec): 00:26:34.854 | 1.00th=[ 140], 5.00th=[ 171], 10.00th=[ 176], 20.00th=[ 197], 00:26:34.854 | 30.00th=[ 218], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:26:34.854 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 279], 00:26:34.854 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 397], 99.95th=[ 397], 00:26:34.854 | 99.99th=[ 397] 00:26:34.854 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=268.80, stdev=57.48, samples=20 00:26:34.854 iops : min= 32, max= 96, avg=67.20, stdev=14.37, samples=20 00:26:34.854 lat (msec) : 250=59.16%, 500=40.84% 00:26:34.854 cpu : usr=98.19%, sys=1.40%, ctx=9, majf=0, minf=30 00:26:34.854 IO depths : 1=1.3%, 2=7.6%, 4=25.0%, 8=54.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:34.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.854 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.854 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.854 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.854 filename0: (groupid=0, jobs=1): err= 0: pid=3512134: Mon Jul 15 13:05:51 2024 00:26:34.854 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10137msec) 00:26:34.854 slat (usec): min=5, max=103, avg=64.23, stdev=17.42 00:26:34.854 clat (msec): min=172, max=376, avg=240.88, stdev=33.26 00:26:34.854 lat (msec): min=172, max=376, avg=240.94, stdev=33.27 00:26:34.854 clat percentiles (msec): 00:26:34.854 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 211], 00:26:34.854 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:26:34.854 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 288], 00:26:34.854 | 99.00th=[ 309], 99.50th=[ 342], 99.90th=[ 376], 99.95th=[ 376], 00:26:34.854 | 99.99th=[ 376] 00:26:34.855 bw ( KiB/s): min= 144, max= 384, per=3.89%, avg=262.40, stdev=46.55, samples=20 00:26:34.855 iops : min= 36, max= 96, avg=65.60, stdev=11.64, samples=20 00:26:34.855 lat (msec) : 250=56.25%, 500=43.75% 00:26:34.855 cpu : usr=98.13%, sys=1.45%, ctx=12, majf=0, minf=29 00:26:34.855 IO depths : 1=1.0%, 2=7.3%, 
4=25.0%, 8=55.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename0: (groupid=0, jobs=1): err= 0: pid=3512135: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=89, BW=356KiB/s (365kB/s)(3632KiB/10194msec) 00:26:34.855 slat (nsec): min=5381, max=74168, avg=22933.65, stdev=20357.11 00:26:34.855 clat (msec): min=85, max=311, avg=178.43, stdev=30.60 00:26:34.855 lat (msec): min=85, max=311, avg=178.45, stdev=30.60 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 86], 5.00th=[ 116], 10.00th=[ 155], 20.00th=[ 163], 00:26:34.855 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 176], 60.00th=[ 180], 00:26:34.855 | 70.00th=[ 192], 80.00th=[ 199], 90.00th=[ 218], 95.00th=[ 220], 00:26:34.855 | 99.00th=[ 257], 99.50th=[ 313], 99.90th=[ 313], 99.95th=[ 313], 00:26:34.855 | 99.99th=[ 313] 00:26:34.855 bw ( KiB/s): min= 256, max= 432, per=5.29%, avg=356.80, stdev=50.36, samples=20 00:26:34.855 iops : min= 64, max= 108, avg=89.20, stdev=12.59, samples=20 00:26:34.855 lat (msec) : 100=3.52%, 250=95.37%, 500=1.10% 00:26:34.855 cpu : usr=97.92%, sys=1.55%, ctx=38, majf=0, minf=33 00:26:34.855 IO depths : 1=1.2%, 2=3.3%, 4=12.4%, 8=71.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=90.5%, 8=4.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename0: (groupid=0, jobs=1): err= 0: pid=3512136: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=88, BW=354KiB/s (362kB/s)(3608KiB/10196msec) 00:26:34.855 slat (nsec): min=4548, max=81464, avg=21504.34, stdev=17654.16 00:26:34.855 clat (msec): min=77, max=272, avg=179.65, stdev=34.71 00:26:34.855 lat (msec): min=77, max=272, avg=179.68, stdev=34.72 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 87], 5.00th=[ 128], 10.00th=[ 144], 20.00th=[ 161], 00:26:34.855 | 30.00th=[ 165], 40.00th=[ 171], 50.00th=[ 176], 60.00th=[ 180], 00:26:34.855 | 70.00th=[ 194], 80.00th=[ 211], 90.00th=[ 224], 95.00th=[ 234], 00:26:34.855 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:26:34.855 | 99.99th=[ 271] 00:26:34.855 bw ( KiB/s): min= 256, max= 432, per=5.26%, avg=354.40, stdev=45.04, samples=20 00:26:34.855 iops : min= 64, max= 108, avg=88.60, stdev=11.26, samples=20 00:26:34.855 lat (msec) : 100=3.55%, 250=92.24%, 500=4.21% 00:26:34.855 cpu : usr=97.85%, sys=1.70%, ctx=31, majf=0, minf=25 00:26:34.855 IO depths : 1=1.0%, 2=3.3%, 4=12.9%, 8=71.1%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=90.6%, 8=4.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename0: (groupid=0, jobs=1): err= 0: pid=3512137: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10121msec) 00:26:34.855 slat (usec): min=5, max=107, avg=46.97, stdev=24.18 00:26:34.855 clat (msec): min=118, max=396, avg=240.58, stdev=44.94 
00:26:34.855 lat (msec): min=118, max=396, avg=240.63, stdev=44.93 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 120], 5.00th=[ 176], 10.00th=[ 176], 20.00th=[ 201], 00:26:34.855 | 30.00th=[ 220], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:26:34.855 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 292], 95.00th=[ 309], 00:26:34.855 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:26:34.855 | 99.99th=[ 397] 00:26:34.855 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=48.53, samples=20 00:26:34.855 iops : min= 32, max= 96, avg=65.60, stdev=12.13, samples=20 00:26:34.855 lat (msec) : 250=54.76%, 500=45.24% 00:26:34.855 cpu : usr=98.18%, sys=1.40%, ctx=22, majf=0, minf=29 00:26:34.855 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename0: (groupid=0, jobs=1): err= 0: pid=3512138: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10122msec) 00:26:34.855 slat (usec): min=17, max=103, avg=62.38, stdev=17.11 00:26:34.855 clat (msec): min=173, max=293, avg=240.46, stdev=31.08 00:26:34.855 lat (msec): min=173, max=293, avg=240.52, stdev=31.09 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 211], 00:26:34.855 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:26:34.855 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 279], 00:26:34.855 | 99.00th=[ 292], 99.50th=[ 292], 99.90th=[ 292], 99.95th=[ 292], 00:26:34.855 | 99.99th=[ 292] 00:26:34.855 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=50.44, samples=20 00:26:34.855 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:26:34.855 lat (msec) : 250=58.93%, 500=41.07% 00:26:34.855 cpu : usr=98.01%, sys=1.55%, ctx=124, majf=0, minf=33 00:26:34.855 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename0: (groupid=0, jobs=1): err= 0: pid=3512139: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10160msec) 00:26:34.855 slat (usec): min=10, max=104, avg=59.31, stdev=22.48 00:26:34.855 clat (msec): min=96, max=342, avg=241.37, stdev=36.91 00:26:34.855 lat (msec): min=96, max=342, avg=241.43, stdev=36.92 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 211], 00:26:34.855 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:26:34.855 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 292], 00:26:34.855 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:26:34.855 | 99.99th=[ 342] 00:26:34.855 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=65.54, samples=20 00:26:34.855 iops : min= 32, max= 96, avg=65.60, stdev=16.38, samples=20 00:26:34.855 lat (msec) : 100=0.30%, 250=53.57%, 500=46.13% 00:26:34.855 cpu : 
usr=98.24%, sys=1.32%, ctx=22, majf=0, minf=40 00:26:34.855 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename0: (groupid=0, jobs=1): err= 0: pid=3512140: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=69, BW=276KiB/s (283kB/s)(2816KiB/10197msec) 00:26:34.855 slat (nsec): min=5278, max=96724, avg=61854.68, stdev=15044.75 00:26:34.855 clat (msec): min=85, max=398, avg=231.14, stdev=55.79 00:26:34.855 lat (msec): min=85, max=398, avg=231.20, stdev=55.80 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 86], 5.00th=[ 118], 10.00th=[ 169], 20.00th=[ 178], 00:26:34.855 | 30.00th=[ 211], 40.00th=[ 234], 50.00th=[ 247], 60.00th=[ 251], 00:26:34.855 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 296], 00:26:34.855 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:26:34.855 | 99.99th=[ 397] 00:26:34.855 bw ( KiB/s): min= 144, max= 384, per=4.08%, avg=275.20, stdev=59.55, samples=20 00:26:34.855 iops : min= 36, max= 96, avg=68.80, stdev=14.89, samples=20 00:26:34.855 lat (msec) : 100=4.26%, 250=55.11%, 500=40.62% 00:26:34.855 cpu : usr=97.77%, sys=1.80%, ctx=16, majf=0, minf=34 00:26:34.855 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename1: (groupid=0, jobs=1): err= 0: pid=3512141: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=67, BW=270KiB/s (276kB/s)(2744KiB/10175msec) 00:26:34.855 slat (usec): min=17, max=144, avg=65.94, stdev=15.19 00:26:34.855 clat (msec): min=96, max=398, avg=236.31, stdev=47.46 00:26:34.855 lat (msec): min=96, max=398, avg=236.38, stdev=47.47 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 140], 5.00th=[ 148], 10.00th=[ 176], 20.00th=[ 192], 00:26:34.855 | 30.00th=[ 213], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:26:34.855 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 296], 00:26:34.855 | 99.00th=[ 388], 99.50th=[ 393], 99.90th=[ 401], 99.95th=[ 401], 00:26:34.855 | 99.99th=[ 401] 00:26:34.855 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=268.00, stdev=55.88, samples=20 00:26:34.855 iops : min= 32, max= 96, avg=67.00, stdev=13.97, samples=20 00:26:34.855 lat (msec) : 100=0.29%, 250=57.58%, 500=42.13% 00:26:34.855 cpu : usr=98.33%, sys=1.24%, ctx=12, majf=0, minf=50 00:26:34.855 IO depths : 1=3.4%, 2=9.6%, 4=25.1%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:26:34.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.855 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.855 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.855 filename1: (groupid=0, jobs=1): err= 0: pid=3512142: Mon Jul 15 13:05:51 2024 00:26:34.855 read: IOPS=69, BW=276KiB/s (283kB/s)(2816KiB/10191msec) 00:26:34.855 slat (nsec): min=3890, max=82574, avg=35712.89, 
stdev=16295.59 00:26:34.855 clat (msec): min=93, max=346, avg=231.30, stdev=45.24 00:26:34.855 lat (msec): min=93, max=346, avg=231.33, stdev=45.24 00:26:34.855 clat percentiles (msec): 00:26:34.855 | 1.00th=[ 94], 5.00th=[ 157], 10.00th=[ 171], 20.00th=[ 197], 00:26:34.855 | 30.00th=[ 213], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 249], 00:26:34.855 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 288], 00:26:34.855 | 99.00th=[ 321], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 347], 00:26:34.855 | 99.99th=[ 347] 00:26:34.855 bw ( KiB/s): min= 128, max= 384, per=4.08%, avg=275.20, stdev=73.89, samples=20 00:26:34.856 iops : min= 32, max= 96, avg=68.80, stdev=18.47, samples=20 00:26:34.856 lat (msec) : 100=2.27%, 250=58.66%, 500=39.06% 00:26:34.856 cpu : usr=98.14%, sys=1.40%, ctx=25, majf=0, minf=37 00:26:34.856 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename1: (groupid=0, jobs=1): err= 0: pid=3512143: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=66, BW=266KiB/s (272kB/s)(2688KiB/10122msec) 00:26:34.856 slat (nsec): min=9290, max=76313, avg=29523.29, stdev=13737.40 00:26:34.856 clat (msec): min=157, max=357, avg=240.74, stdev=37.94 00:26:34.856 lat (msec): min=157, max=357, avg=240.77, stdev=37.93 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 157], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 211], 00:26:34.856 | 30.00th=[ 218], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:26:34.856 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 279], 95.00th=[ 296], 00:26:34.856 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 359], 99.95th=[ 359], 00:26:34.856 | 99.99th=[ 359] 00:26:34.856 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=50.44, samples=20 00:26:34.856 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:26:34.856 lat (msec) : 250=55.65%, 500=44.35% 00:26:34.856 cpu : usr=98.30%, sys=1.31%, ctx=18, majf=0, minf=41 00:26:34.856 IO depths : 1=4.8%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename1: (groupid=0, jobs=1): err= 0: pid=3512144: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=70, BW=282KiB/s (289kB/s)(2880KiB/10197msec) 00:26:34.856 slat (nsec): min=8641, max=53071, avg=24424.48, stdev=6430.29 00:26:34.856 clat (msec): min=85, max=366, avg=226.28, stdev=47.68 00:26:34.856 lat (msec): min=85, max=366, avg=226.30, stdev=47.69 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 86], 5.00th=[ 128], 10.00th=[ 169], 20.00th=[ 190], 00:26:34.856 | 30.00th=[ 211], 40.00th=[ 220], 50.00th=[ 243], 60.00th=[ 249], 00:26:34.856 | 70.00th=[ 253], 80.00th=[ 264], 90.00th=[ 268], 95.00th=[ 271], 00:26:34.856 | 99.00th=[ 309], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 368], 00:26:34.856 | 99.99th=[ 368] 00:26:34.856 bw ( KiB/s): min= 240, max= 432, per=4.17%, avg=281.60, stdev=55.28, samples=20 00:26:34.856 iops : min= 60, max= 108, avg=70.40, 
stdev=13.82, samples=20 00:26:34.856 lat (msec) : 100=4.44%, 250=60.83%, 500=34.72% 00:26:34.856 cpu : usr=97.94%, sys=1.46%, ctx=48, majf=0, minf=38 00:26:34.856 IO depths : 1=5.0%, 2=11.1%, 4=24.6%, 8=51.8%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename1: (groupid=0, jobs=1): err= 0: pid=3512145: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10159msec) 00:26:34.856 slat (nsec): min=8373, max=82697, avg=28538.54, stdev=22336.83 00:26:34.856 clat (msec): min=173, max=308, avg=241.60, stdev=32.90 00:26:34.856 lat (msec): min=173, max=308, avg=241.63, stdev=32.88 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 176], 5.00th=[ 176], 10.00th=[ 194], 20.00th=[ 211], 00:26:34.856 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:26:34.856 | 70.00th=[ 262], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 288], 00:26:34.856 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:26:34.856 | 99.99th=[ 309] 00:26:34.856 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=50.44, samples=20 00:26:34.856 iops : min= 32, max= 96, avg=65.60, stdev=12.61, samples=20 00:26:34.856 lat (msec) : 250=54.76%, 500=45.24% 00:26:34.856 cpu : usr=98.16%, sys=1.44%, ctx=14, majf=0, minf=34 00:26:34.856 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename1: (groupid=0, jobs=1): err= 0: pid=3512146: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10155msec) 00:26:34.856 slat (usec): min=8, max=104, avg=47.86, stdev=23.73 00:26:34.856 clat (msec): min=90, max=348, avg=241.35, stdev=37.20 00:26:34.856 lat (msec): min=90, max=348, avg=241.40, stdev=37.19 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 188], 20.00th=[ 211], 00:26:34.856 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:26:34.856 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 292], 00:26:34.856 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:26:34.856 | 99.99th=[ 351] 00:26:34.856 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=65.33, samples=20 00:26:34.856 iops : min= 32, max= 96, avg=65.60, stdev=16.33, samples=20 00:26:34.856 lat (msec) : 100=0.30%, 250=56.10%, 500=43.60% 00:26:34.856 cpu : usr=98.19%, sys=1.39%, ctx=38, majf=0, minf=32 00:26:34.856 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename1: (groupid=0, jobs=1): err= 0: pid=3512147: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=69, 
BW=277KiB/s (284kB/s)(2816KiB/10156msec) 00:26:34.856 slat (usec): min=3, max=103, avg=48.36, stdev=16.75 00:26:34.856 clat (msec): min=85, max=387, avg=228.00, stdev=50.49 00:26:34.856 lat (msec): min=85, max=387, avg=228.05, stdev=50.50 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 87], 5.00th=[ 129], 10.00th=[ 169], 20.00th=[ 190], 00:26:34.856 | 30.00th=[ 211], 40.00th=[ 220], 50.00th=[ 245], 60.00th=[ 249], 00:26:34.856 | 70.00th=[ 255], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 288], 00:26:34.856 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 388], 99.95th=[ 388], 00:26:34.856 | 99.99th=[ 388] 00:26:34.856 bw ( KiB/s): min= 144, max= 512, per=4.08%, avg=275.20, stdev=72.60, samples=20 00:26:34.856 iops : min= 36, max= 128, avg=68.80, stdev=18.15, samples=20 00:26:34.856 lat (msec) : 100=4.26%, 250=56.82%, 500=38.92% 00:26:34.856 cpu : usr=98.04%, sys=1.53%, ctx=15, majf=0, minf=53 00:26:34.856 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename1: (groupid=0, jobs=1): err= 0: pid=3512148: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=66, BW=265KiB/s (272kB/s)(2688KiB/10129msec) 00:26:34.856 slat (usec): min=17, max=100, avg=64.30, stdev=13.88 00:26:34.856 clat (msec): min=128, max=356, avg=240.61, stdev=36.34 00:26:34.856 lat (msec): min=128, max=356, avg=240.67, stdev=36.35 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 159], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 211], 00:26:34.856 | 30.00th=[ 218], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:26:34.856 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 292], 00:26:34.856 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:26:34.856 | 99.99th=[ 355] 00:26:34.856 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=48.53, samples=20 00:26:34.856 iops : min= 32, max= 96, avg=65.60, stdev=12.13, samples=20 00:26:34.856 lat (msec) : 250=55.21%, 500=44.79% 00:26:34.856 cpu : usr=97.94%, sys=1.66%, ctx=14, majf=0, minf=27 00:26:34.856 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename2: (groupid=0, jobs=1): err= 0: pid=3512149: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=95, BW=383KiB/s (392kB/s)(3904KiB/10193msec) 00:26:34.856 slat (nsec): min=4489, max=84960, avg=14189.25, stdev=13862.17 00:26:34.856 clat (msec): min=85, max=285, avg=166.38, stdev=34.51 00:26:34.856 lat (msec): min=85, max=285, avg=166.39, stdev=34.51 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 87], 5.00th=[ 109], 10.00th=[ 118], 20.00th=[ 140], 00:26:34.856 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 176], 00:26:34.856 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 203], 95.00th=[ 228], 00:26:34.856 | 99.00th=[ 266], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:26:34.856 | 99.99th=[ 284] 00:26:34.856 bw ( KiB/s): min= 304, max= 512, per=5.70%, 
avg=384.00, stdev=54.44, samples=20 00:26:34.856 iops : min= 76, max= 128, avg=96.00, stdev=13.61, samples=20 00:26:34.856 lat (msec) : 100=3.07%, 250=94.06%, 500=2.87% 00:26:34.856 cpu : usr=97.96%, sys=1.65%, ctx=15, majf=0, minf=48 00:26:34.856 IO depths : 1=2.8%, 2=5.8%, 4=15.2%, 8=66.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:34.856 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.856 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.856 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.856 filename2: (groupid=0, jobs=1): err= 0: pid=3512150: Mon Jul 15 13:05:51 2024 00:26:34.856 read: IOPS=67, BW=270KiB/s (276kB/s)(2744KiB/10175msec) 00:26:34.856 slat (usec): min=12, max=108, avg=64.21, stdev=20.38 00:26:34.856 clat (msec): min=102, max=397, avg=236.28, stdev=37.76 00:26:34.856 lat (msec): min=102, max=397, avg=236.35, stdev=37.77 00:26:34.856 clat percentiles (msec): 00:26:34.856 | 1.00th=[ 140], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 209], 00:26:34.856 | 30.00th=[ 220], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:26:34.856 | 70.00th=[ 259], 80.00th=[ 266], 90.00th=[ 271], 95.00th=[ 279], 00:26:34.856 | 99.00th=[ 296], 99.50th=[ 347], 99.90th=[ 397], 99.95th=[ 397], 00:26:34.856 | 99.99th=[ 397] 00:26:34.856 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=268.00, stdev=57.54, samples=20 00:26:34.856 iops : min= 32, max= 96, avg=67.00, stdev=14.39, samples=20 00:26:34.856 lat (msec) : 250=58.45%, 500=41.55% 00:26:34.856 cpu : usr=98.19%, sys=1.39%, ctx=13, majf=0, minf=30 00:26:34.856 IO depths : 1=5.7%, 2=12.0%, 4=25.1%, 8=50.6%, 16=6.7%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.857 filename2: (groupid=0, jobs=1): err= 0: pid=3512151: Mon Jul 15 13:05:51 2024 00:26:34.857 read: IOPS=67, BW=270KiB/s (277kB/s)(2744KiB/10161msec) 00:26:34.857 slat (nsec): min=8462, max=79879, avg=32119.34, stdev=13191.21 00:26:34.857 clat (msec): min=128, max=396, avg=236.47, stdev=38.54 00:26:34.857 lat (msec): min=128, max=396, avg=236.50, stdev=38.54 00:26:34.857 clat percentiles (msec): 00:26:34.857 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 178], 20.00th=[ 199], 00:26:34.857 | 30.00th=[ 218], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 249], 00:26:34.857 | 70.00th=[ 264], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 279], 00:26:34.857 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 397], 99.95th=[ 397], 00:26:34.857 | 99.99th=[ 397] 00:26:34.857 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=268.00, stdev=53.92, samples=20 00:26:34.857 iops : min= 32, max= 96, avg=67.00, stdev=13.48, samples=20 00:26:34.857 lat (msec) : 250=60.64%, 500=39.36% 00:26:34.857 cpu : usr=97.53%, sys=1.84%, ctx=72, majf=0, minf=31 00:26:34.857 IO depths : 1=2.2%, 2=8.5%, 4=25.1%, 8=54.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.857 filename2: (groupid=0, jobs=1): err= 0: 
pid=3512152: Mon Jul 15 13:05:51 2024 00:26:34.857 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10155msec) 00:26:34.857 slat (usec): min=23, max=100, avg=68.15, stdev=12.07 00:26:34.857 clat (msec): min=116, max=398, avg=241.19, stdev=43.38 00:26:34.857 lat (msec): min=116, max=398, avg=241.26, stdev=43.38 00:26:34.857 clat percentiles (msec): 00:26:34.857 | 1.00th=[ 140], 5.00th=[ 176], 10.00th=[ 176], 20.00th=[ 209], 00:26:34.857 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:26:34.857 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 288], 95.00th=[ 305], 00:26:34.857 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 401], 99.95th=[ 401], 00:26:34.857 | 99.99th=[ 401] 00:26:34.857 bw ( KiB/s): min= 144, max= 384, per=3.89%, avg=262.40, stdev=49.08, samples=20 00:26:34.857 iops : min= 36, max= 96, avg=65.60, stdev=12.27, samples=20 00:26:34.857 lat (msec) : 250=54.61%, 500=45.39% 00:26:34.857 cpu : usr=98.28%, sys=1.31%, ctx=10, majf=0, minf=40 00:26:34.857 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.857 filename2: (groupid=0, jobs=1): err= 0: pid=3512153: Mon Jul 15 13:05:51 2024 00:26:34.857 read: IOPS=69, BW=277KiB/s (284kB/s)(2816KiB/10155msec) 00:26:34.857 slat (usec): min=7, max=114, avg=56.88, stdev=15.51 00:26:34.857 clat (msec): min=50, max=395, avg=227.91, stdev=52.90 00:26:34.857 lat (msec): min=50, max=395, avg=227.97, stdev=52.90 00:26:34.857 clat percentiles (msec): 00:26:34.857 | 1.00th=[ 51], 5.00th=[ 142], 10.00th=[ 174], 20.00th=[ 188], 00:26:34.857 | 30.00th=[ 211], 40.00th=[ 220], 50.00th=[ 245], 60.00th=[ 249], 00:26:34.857 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 288], 00:26:34.857 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:26:34.857 | 99.99th=[ 397] 00:26:34.857 bw ( KiB/s): min= 128, max= 513, per=4.08%, avg=275.25, stdev=74.06, samples=20 00:26:34.857 iops : min= 32, max= 128, avg=68.80, stdev=18.47, samples=20 00:26:34.857 lat (msec) : 100=4.55%, 250=57.24%, 500=38.21% 00:26:34.857 cpu : usr=98.15%, sys=1.43%, ctx=15, majf=0, minf=50 00:26:34.857 IO depths : 1=3.3%, 2=9.4%, 4=24.4%, 8=53.7%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.857 filename2: (groupid=0, jobs=1): err= 0: pid=3512154: Mon Jul 15 13:05:51 2024 00:26:34.857 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10161msec) 00:26:34.857 slat (nsec): min=8234, max=93120, avg=53239.53, stdev=22122.63 00:26:34.857 clat (msec): min=165, max=348, avg=241.44, stdev=36.72 00:26:34.857 lat (msec): min=165, max=348, avg=241.50, stdev=36.71 00:26:34.857 clat percentiles (msec): 00:26:34.857 | 1.00th=[ 165], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 211], 00:26:34.857 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:26:34.857 | 70.00th=[ 262], 80.00th=[ 268], 90.00th=[ 275], 95.00th=[ 292], 00:26:34.857 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 351], 00:26:34.857 | 
99.99th=[ 351] 00:26:34.857 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=65.54, samples=20 00:26:34.857 iops : min= 32, max= 96, avg=65.60, stdev=16.38, samples=20 00:26:34.857 lat (msec) : 250=56.10%, 500=43.90% 00:26:34.857 cpu : usr=98.03%, sys=1.56%, ctx=30, majf=0, minf=38 00:26:34.857 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.857 filename2: (groupid=0, jobs=1): err= 0: pid=3512155: Mon Jul 15 13:05:51 2024 00:26:34.857 read: IOPS=66, BW=265KiB/s (271kB/s)(2688KiB/10160msec) 00:26:34.857 slat (usec): min=17, max=102, avg=66.32, stdev=13.82 00:26:34.857 clat (msec): min=90, max=365, avg=241.35, stdev=42.30 00:26:34.857 lat (msec): min=90, max=365, avg=241.41, stdev=42.30 00:26:34.857 clat percentiles (msec): 00:26:34.857 | 1.00th=[ 157], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 211], 00:26:34.857 | 30.00th=[ 220], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 253], 00:26:34.857 | 70.00th=[ 266], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 321], 00:26:34.857 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368], 00:26:34.857 | 99.99th=[ 368] 00:26:34.857 bw ( KiB/s): min= 128, max= 384, per=3.89%, avg=262.40, stdev=62.16, samples=20 00:26:34.857 iops : min= 32, max= 96, avg=65.60, stdev=15.54, samples=20 00:26:34.857 lat (msec) : 100=0.30%, 250=54.91%, 500=44.79% 00:26:34.857 cpu : usr=98.16%, sys=1.44%, ctx=13, majf=0, minf=29 00:26:34.857 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:34.857 filename2: (groupid=0, jobs=1): err= 0: pid=3512156: Mon Jul 15 13:05:51 2024 00:26:34.857 read: IOPS=69, BW=277KiB/s (283kB/s)(2816KiB/10175msec) 00:26:34.857 slat (nsec): min=8945, max=93052, avg=26596.61, stdev=9231.79 00:26:34.857 clat (msec): min=136, max=343, avg=231.00, stdev=39.86 00:26:34.857 lat (msec): min=136, max=343, avg=231.02, stdev=39.86 00:26:34.857 clat percentiles (msec): 00:26:34.857 | 1.00th=[ 138], 5.00th=[ 157], 10.00th=[ 176], 20.00th=[ 194], 00:26:34.857 | 30.00th=[ 211], 40.00th=[ 234], 50.00th=[ 245], 60.00th=[ 249], 00:26:34.857 | 70.00th=[ 255], 80.00th=[ 266], 90.00th=[ 268], 95.00th=[ 288], 00:26:34.857 | 99.00th=[ 313], 99.50th=[ 338], 99.90th=[ 342], 99.95th=[ 342], 00:26:34.857 | 99.99th=[ 342] 00:26:34.857 bw ( KiB/s): min= 240, max= 384, per=4.08%, avg=275.20, stdev=45.14, samples=20 00:26:34.857 iops : min= 60, max= 96, avg=68.80, stdev=11.28, samples=20 00:26:34.857 lat (msec) : 250=63.07%, 500=36.93% 00:26:34.857 cpu : usr=97.24%, sys=1.96%, ctx=110, majf=0, minf=39 00:26:34.857 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:34.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.857 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.857 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:34.857 00:26:34.857 Run status group 0 (all jobs): 00:26:34.857 READ: bw=6733KiB/s (6895kB/s), 265KiB/s-383KiB/s (271kB/s-392kB/s), io=67.0MiB (70.3MB), run=10121-10197msec 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.857 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 bdev_null0 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 [2024-07-15 13:05:51.959686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:34.858 13:05:51 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 bdev_null1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.858 { 00:26:34.858 "params": { 00:26:34.858 "name": "Nvme$subsystem", 00:26:34.858 "trtype": "$TEST_TRANSPORT", 00:26:34.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.858 "adrfam": "ipv4", 00:26:34.858 "trsvcid": "$NVMF_PORT", 00:26:34.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.858 "hdgst": ${hdgst:-false}, 00:26:34.858 "ddgst": ${ddgst:-false} 00:26:34.858 }, 00:26:34.858 "method": "bdev_nvme_attach_controller" 00:26:34.858 } 00:26:34.858 EOF 00:26:34.858 )") 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
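The Nvme$subsystem heredoc a few records above is the per-target template: gen_nvmf_target_json emits one bdev_nvme_attach_controller block per subsystem index, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded, and jq joins the blocks into the JSON that fio receives on /dev/fd/62 (the fully resolved text is printed a little further down in the trace). Written out by hand for the two-target case it amounts to something like the file below; the outer "subsystems"/"bdev" wrapper is an assumption about how the spdk_bdev fio plugin expects an SPDK JSON config to be framed and is not visible in this log, and /tmp/spdk_fio.json is just a name picked for illustration:

  cat > /tmp/spdk_fio.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

A second, identical entry named Nvme1 pointing at cnode1/host1 completes the two-subsystem configuration shown in the resolved printf output below.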
00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:34.858 13:05:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.858 { 00:26:34.858 "params": { 00:26:34.858 "name": "Nvme$subsystem", 00:26:34.858 "trtype": "$TEST_TRANSPORT", 00:26:34.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.858 "adrfam": "ipv4", 00:26:34.858 "trsvcid": "$NVMF_PORT", 00:26:34.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.858 "hdgst": ${hdgst:-false}, 00:26:34.858 "ddgst": ${ddgst:-false} 00:26:34.858 }, 00:26:34.858 "method": "bdev_nvme_attach_controller" 00:26:34.858 } 00:26:34.858 EOF 00:26:34.858 )") 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:34.858 "params": { 00:26:34.858 "name": "Nvme0", 00:26:34.858 "trtype": "tcp", 00:26:34.858 "traddr": "10.0.0.2", 00:26:34.858 "adrfam": "ipv4", 00:26:34.858 "trsvcid": "4420", 00:26:34.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:34.858 "hdgst": false, 00:26:34.858 "ddgst": false 00:26:34.858 }, 00:26:34.858 "method": "bdev_nvme_attach_controller" 00:26:34.858 },{ 00:26:34.858 "params": { 00:26:34.858 "name": "Nvme1", 00:26:34.858 "trtype": "tcp", 00:26:34.858 "traddr": "10.0.0.2", 00:26:34.858 "adrfam": "ipv4", 00:26:34.858 "trsvcid": "4420", 00:26:34.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:34.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:34.858 "hdgst": false, 00:26:34.858 "ddgst": false 00:26:34.858 }, 00:26:34.858 "method": "bdev_nvme_attach_controller" 00:26:34.858 }' 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:34.858 13:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.858 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:34.858 ... 00:26:34.859 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:34.859 ... 
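The block printed above is the JSON that gen_nvmf_target_json hands to fio's spdk_bdev ioengine over /dev/fd/62: one bdev_nvme_attach_controller entry per subsystem, so the plugin attaches Nvme0 and Nvme1 over NVMe/TCP before the 8k randread job starts. A minimal standalone sketch of the same invocation pattern follows; the config path, the spdk build path and the single-controller job are illustrative assumptions, not part of the recorded run.

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0"
          }
        }
      ]
    }
  ]
}
EOF
# Preload the SPDK fio bdev plugin and drive the attached namespace (bdev Nvme0n1),
# mirroring the bs/iodepth/runtime values used by the traced rand_params job.
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev fio --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/nvme0.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=8k --iodepth=8 --runtime=5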
00:26:34.859 fio-3.35 00:26:34.859 Starting 4 threads 00:26:34.859 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.119 00:26:40.119 filename0: (groupid=0, jobs=1): err= 0: pid=3513536: Mon Jul 15 13:05:58 2024 00:26:40.119 read: IOPS=2000, BW=15.6MiB/s (16.4MB/s)(78.2MiB/5001msec) 00:26:40.119 slat (nsec): min=5247, max=68260, avg=19773.38, stdev=8877.69 00:26:40.119 clat (usec): min=731, max=7292, avg=3922.66, stdev=396.82 00:26:40.119 lat (usec): min=744, max=7307, avg=3942.43, stdev=397.13 00:26:40.119 clat percentiles (usec): 00:26:40.119 | 1.00th=[ 2704], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3752], 00:26:40.119 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3949], 00:26:40.119 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:26:40.119 | 99.00th=[ 5211], 99.50th=[ 6259], 99.90th=[ 6980], 99.95th=[ 7177], 00:26:40.119 | 99.99th=[ 7308] 00:26:40.119 bw ( KiB/s): min=15488, max=16784, per=25.00%, avg=15996.80, stdev=373.09, samples=10 00:26:40.119 iops : min= 1936, max= 2098, avg=1999.60, stdev=46.64, samples=10 00:26:40.119 lat (usec) : 750=0.01%, 1000=0.07% 00:26:40.119 lat (msec) : 2=0.64%, 4=65.77%, 10=33.51% 00:26:40.119 cpu : usr=89.54%, sys=7.66%, ctx=613, majf=0, minf=9 00:26:40.119 IO depths : 1=0.1%, 2=22.8%, 4=51.7%, 8=25.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 issued rwts: total=10004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.120 filename0: (groupid=0, jobs=1): err= 0: pid=3513537: Mon Jul 15 13:05:58 2024 00:26:40.120 read: IOPS=1999, BW=15.6MiB/s (16.4MB/s)(78.1MiB/5002msec) 00:26:40.120 slat (usec): min=6, max=132, avg=21.03, stdev=10.40 00:26:40.120 clat (usec): min=826, max=7103, avg=3914.28, stdev=339.23 00:26:40.120 lat (usec): min=857, max=7112, avg=3935.31, stdev=339.75 00:26:40.120 clat percentiles (usec): 00:26:40.120 | 1.00th=[ 2999], 5.00th=[ 3621], 10.00th=[ 3687], 20.00th=[ 3752], 00:26:40.120 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3949], 00:26:40.120 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4178], 95.00th=[ 4293], 00:26:40.120 | 99.00th=[ 4621], 99.50th=[ 5997], 99.90th=[ 6587], 99.95th=[ 6652], 00:26:40.120 | 99.99th=[ 6980] 00:26:40.120 bw ( KiB/s): min=15488, max=16672, per=25.03%, avg=16017.78, stdev=371.50, samples=9 00:26:40.120 iops : min= 1936, max= 2084, avg=2002.22, stdev=46.44, samples=9 00:26:40.120 lat (usec) : 1000=0.03% 00:26:40.120 lat (msec) : 2=0.51%, 4=68.31%, 10=31.15% 00:26:40.120 cpu : usr=81.42%, sys=10.88%, ctx=250, majf=0, minf=9 00:26:40.120 IO depths : 1=0.7%, 2=21.8%, 4=52.8%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 issued rwts: total=10002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.120 filename1: (groupid=0, jobs=1): err= 0: pid=3513538: Mon Jul 15 13:05:58 2024 00:26:40.120 read: IOPS=1995, BW=15.6MiB/s (16.3MB/s)(78.0MiB/5001msec) 00:26:40.120 slat (nsec): min=6569, max=67308, avg=18602.41, stdev=7738.73 00:26:40.120 clat (usec): min=955, max=7125, avg=3939.59, stdev=319.48 00:26:40.120 lat (usec): min=969, max=7140, avg=3958.20, stdev=319.88 
00:26:40.120 clat percentiles (usec): 00:26:40.120 | 1.00th=[ 3326], 5.00th=[ 3654], 10.00th=[ 3720], 20.00th=[ 3785], 00:26:40.120 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3982], 00:26:40.120 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4293], 00:26:40.120 | 99.00th=[ 4752], 99.50th=[ 5735], 99.90th=[ 6587], 99.95th=[ 6718], 00:26:40.120 | 99.99th=[ 7111] 00:26:40.120 bw ( KiB/s): min=15408, max=16608, per=24.97%, avg=15982.22, stdev=374.59, samples=9 00:26:40.120 iops : min= 1926, max= 2076, avg=1997.78, stdev=46.82, samples=9 00:26:40.120 lat (usec) : 1000=0.01% 00:26:40.120 lat (msec) : 2=0.35%, 4=65.20%, 10=34.44% 00:26:40.120 cpu : usr=94.38%, sys=5.04%, ctx=8, majf=0, minf=9 00:26:40.120 IO depths : 1=0.3%, 2=20.8%, 4=53.9%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 complete : 0=0.0%, 4=90.2%, 8=9.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 issued rwts: total=9980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.120 filename1: (groupid=0, jobs=1): err= 0: pid=3513539: Mon Jul 15 13:05:58 2024 00:26:40.120 read: IOPS=2005, BW=15.7MiB/s (16.4MB/s)(78.4MiB/5003msec) 00:26:40.120 slat (nsec): min=3977, max=59828, avg=17314.31, stdev=8308.02 00:26:40.120 clat (usec): min=1123, max=7106, avg=3935.53, stdev=274.79 00:26:40.120 lat (usec): min=1147, max=7126, avg=3952.84, stdev=275.32 00:26:40.120 clat percentiles (usec): 00:26:40.120 | 1.00th=[ 3097], 5.00th=[ 3621], 10.00th=[ 3720], 20.00th=[ 3785], 00:26:40.120 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3982], 00:26:40.120 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4293], 00:26:40.120 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 5669], 99.95th=[ 5997], 00:26:40.120 | 99.99th=[ 7046] 00:26:40.120 bw ( KiB/s): min=15488, max=17072, per=25.07%, avg=16043.20, stdev=478.94, samples=10 00:26:40.120 iops : min= 1936, max= 2134, avg=2005.40, stdev=59.87, samples=10 00:26:40.120 lat (msec) : 2=0.31%, 4=62.70%, 10=36.99% 00:26:40.120 cpu : usr=92.92%, sys=5.86%, ctx=75, majf=0, minf=9 00:26:40.120 IO depths : 1=0.1%, 2=9.6%, 4=63.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:40.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:40.120 issued rwts: total=10035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:40.120 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:40.120 00:26:40.120 Run status group 0 (all jobs): 00:26:40.120 READ: bw=62.5MiB/s (65.5MB/s), 15.6MiB/s-15.7MiB/s (16.3MB/s-16.4MB/s), io=313MiB (328MB), run=5001-5003msec 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 00:26:40.120 real 0m24.288s 00:26:40.120 user 4m34.961s 00:26:40.120 sys 0m7.323s 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 ************************************ 00:26:40.120 END TEST fio_dif_rand_params 00:26:40.120 ************************************ 00:26:40.120 13:05:58 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:40.120 13:05:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:40.120 13:05:58 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:40.120 13:05:58 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 ************************************ 00:26:40.120 START TEST fio_dif_digest 00:26:40.120 ************************************ 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 bdev_null0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.120 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:40.120 [2024-07-15 13:05:58.323696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:40.379 { 00:26:40.379 "params": { 00:26:40.379 "name": "Nvme$subsystem", 00:26:40.379 "trtype": "$TEST_TRANSPORT", 00:26:40.379 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:40.379 "adrfam": "ipv4", 00:26:40.379 "trsvcid": "$NVMF_PORT", 00:26:40.379 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:40.379 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:40.379 "hdgst": ${hdgst:-false}, 00:26:40.379 "ddgst": ${ddgst:-false} 00:26:40.379 }, 00:26:40.379 "method": "bdev_nvme_attach_controller" 00:26:40.379 } 00:26:40.379 EOF 00:26:40.379 )") 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:40.379 "params": { 00:26:40.379 "name": "Nvme0", 00:26:40.379 "trtype": "tcp", 00:26:40.379 "traddr": "10.0.0.2", 00:26:40.379 "adrfam": "ipv4", 00:26:40.379 "trsvcid": "4420", 00:26:40.379 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:40.379 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:40.379 "hdgst": true, 00:26:40.379 "ddgst": true 00:26:40.379 }, 00:26:40.379 "method": "bdev_nvme_attach_controller" 00:26:40.379 }' 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:40.379 13:05:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:40.379 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:40.379 ... 
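In this digest variant the generated attach parameters carry "hdgst": true and "ddgst": true, so the initiator-side NVMe bdev negotiates NVMe/TCP header and data digests against the DIF-type-3 null bdev created by the rpc_cmd calls earlier in this test. Roughly the same target-side setup can be issued by hand with rpc.py; the script path is an assumption, the nvmf_create_transport step is only needed if the transport does not already exist, and the remaining values simply mirror the traced commands.

# Assumes a running nvmf_tgt reachable over the default RPC socket.
/path/to/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
/path/to/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
/path/to/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
/path/to/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
/path/to/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420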
00:26:40.379 fio-3.35 00:26:40.379 Starting 3 threads 00:26:40.638 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.847 00:26:52.847 filename0: (groupid=0, jobs=1): err= 0: pid=3514408: Mon Jul 15 13:06:09 2024 00:26:52.847 read: IOPS=200, BW=25.0MiB/s (26.3MB/s)(252MiB/10045msec) 00:26:52.847 slat (nsec): min=4693, max=40005, avg=14331.62, stdev=3850.33 00:26:52.847 clat (usec): min=11321, max=57531, avg=14938.02, stdev=1584.35 00:26:52.847 lat (usec): min=11336, max=57545, avg=14952.35, stdev=1584.37 00:26:52.847 clat percentiles (usec): 00:26:52.847 | 1.00th=[12518], 5.00th=[13304], 10.00th=[13566], 20.00th=[14091], 00:26:52.847 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:26:52.847 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:26:52.847 | 99.00th=[17695], 99.50th=[17957], 99.90th=[22938], 99.95th=[46924], 00:26:52.847 | 99.99th=[57410] 00:26:52.847 bw ( KiB/s): min=25088, max=26368, per=32.62%, avg=25728.00, stdev=419.42, samples=20 00:26:52.847 iops : min= 196, max= 206, avg=201.00, stdev= 3.28, samples=20 00:26:52.847 lat (msec) : 20=99.75%, 50=0.20%, 100=0.05% 00:26:52.847 cpu : usr=89.87%, sys=9.63%, ctx=34, majf=0, minf=125 00:26:52.847 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.847 issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.847 filename0: (groupid=0, jobs=1): err= 0: pid=3514409: Mon Jul 15 13:06:09 2024 00:26:52.847 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(266MiB/10007msec) 00:26:52.847 slat (nsec): min=4429, max=67662, avg=14727.87, stdev=4238.18 00:26:52.847 clat (usec): min=8544, max=22022, avg=14107.52, stdev=1035.08 00:26:52.847 lat (usec): min=8554, max=22053, avg=14122.25, stdev=1035.16 00:26:52.847 clat percentiles (usec): 00:26:52.847 | 1.00th=[11731], 5.00th=[12387], 10.00th=[12780], 20.00th=[13304], 00:26:52.847 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:26:52.847 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15401], 95.00th=[15795], 00:26:52.847 | 99.00th=[16581], 99.50th=[16909], 99.90th=[19792], 99.95th=[19792], 00:26:52.847 | 99.99th=[22152] 00:26:52.847 bw ( KiB/s): min=26368, max=28928, per=34.44%, avg=27161.60, stdev=568.81, samples=20 00:26:52.847 iops : min= 206, max= 226, avg=212.20, stdev= 4.44, samples=20 00:26:52.847 lat (msec) : 10=0.09%, 20=99.86%, 50=0.05% 00:26:52.847 cpu : usr=89.40%, sys=10.09%, ctx=23, majf=0, minf=160 00:26:52.847 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.847 issued rwts: total=2125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.847 filename0: (groupid=0, jobs=1): err= 0: pid=3514410: Mon Jul 15 13:06:09 2024 00:26:52.847 read: IOPS=204, BW=25.6MiB/s (26.8MB/s)(257MiB/10047msec) 00:26:52.847 slat (nsec): min=4422, max=42086, avg=14543.26, stdev=3899.91 00:26:52.847 clat (usec): min=11559, max=52016, avg=14635.30, stdev=1487.40 00:26:52.847 lat (usec): min=11573, max=52030, avg=14649.84, stdev=1487.30 00:26:52.847 clat percentiles (usec): 00:26:52.847 | 
1.00th=[12256], 5.00th=[13042], 10.00th=[13304], 20.00th=[13829], 00:26:52.847 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14746], 00:26:52.847 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:26:52.847 | 99.00th=[16909], 99.50th=[17433], 99.90th=[20317], 99.95th=[46924], 00:26:52.847 | 99.99th=[52167] 00:26:52.847 bw ( KiB/s): min=25600, max=27136, per=33.29%, avg=26255.35, stdev=423.40, samples=20 00:26:52.847 iops : min= 200, max= 212, avg=205.10, stdev= 3.34, samples=20 00:26:52.847 lat (msec) : 20=99.85%, 50=0.10%, 100=0.05% 00:26:52.847 cpu : usr=89.60%, sys=9.90%, ctx=21, majf=0, minf=97 00:26:52.847 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.847 issued rwts: total=2054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.847 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:52.847 00:26:52.847 Run status group 0 (all jobs): 00:26:52.847 READ: bw=77.0MiB/s (80.8MB/s), 25.0MiB/s-26.5MiB/s (26.3MB/s-27.8MB/s), io=774MiB (811MB), run=10007-10047msec 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.847 00:26:52.847 real 0m11.054s 00:26:52.847 user 0m27.985s 00:26:52.847 sys 0m3.271s 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:52.847 13:06:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:52.847 ************************************ 00:26:52.847 END TEST fio_dif_digest 00:26:52.847 ************************************ 00:26:52.847 13:06:09 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:52.847 13:06:09 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:52.847 13:06:09 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:26:52.847 rmmod nvme_tcp 00:26:52.847 rmmod nvme_fabrics 00:26:52.847 rmmod nvme_keyring 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3508115 ']' 00:26:52.847 13:06:09 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3508115 00:26:52.847 13:06:09 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3508115 ']' 00:26:52.847 13:06:09 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3508115 00:26:52.847 13:06:09 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:52.847 13:06:09 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.848 13:06:09 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3508115 00:26:52.848 13:06:09 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:52.848 13:06:09 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.848 13:06:09 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3508115' 00:26:52.848 killing process with pid 3508115 00:26:52.848 13:06:09 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3508115 00:26:52.848 13:06:09 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3508115 00:26:52.848 13:06:09 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:52.848 13:06:09 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:52.848 Waiting for block devices as requested 00:26:52.848 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:52.848 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:53.106 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:53.106 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:53.106 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:53.365 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:53.365 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:53.365 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:53.365 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:53.625 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:53.625 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:53.625 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:53.625 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:53.885 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:53.885 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:53.885 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:54.143 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:54.143 13:06:12 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:54.143 13:06:12 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:54.143 13:06:12 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:54.143 13:06:12 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:54.143 13:06:12 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.143 13:06:12 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:54.143 13:06:12 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.677 13:06:14 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:56.677 00:26:56.677 real 1m7.149s 00:26:56.677 user 6m30.407s 00:26:56.677 sys 0m20.445s 00:26:56.677 13:06:14 nvmf_dif -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:26:56.677 13:06:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:56.677 ************************************ 00:26:56.677 END TEST nvmf_dif 00:26:56.677 ************************************ 00:26:56.677 13:06:14 -- common/autotest_common.sh@1142 -- # return 0 00:26:56.677 13:06:14 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:56.677 13:06:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:56.677 13:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.677 13:06:14 -- common/autotest_common.sh@10 -- # set +x 00:26:56.677 ************************************ 00:26:56.677 START TEST nvmf_abort_qd_sizes 00:26:56.677 ************************************ 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:56.677 * Looking for test storage... 00:26:56.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.677 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.678 13:06:14 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.678 13:06:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:58.581 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:58.582 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:58.582 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:58.582 Found net devices under 0000:84:00.0: cvl_0_0 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:58.582 Found net devices under 0000:84:00.1: cvl_0_1 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:58.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:26:58.582 00:26:58.582 --- 10.0.0.2 ping statistics --- 00:26:58.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.582 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:58.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:26:58.582 00:26:58.582 --- 10.0.0.1 ping statistics --- 00:26:58.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.582 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:58.582 13:06:16 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:59.516 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:59.516 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:59.773 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:59.773 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:59.773 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:59.773 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:59.773 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:59.773 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:59.773 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:00.711 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3519849 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3519849 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3519849 ']' 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:00.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:00.711 13:06:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:00.970 [2024-07-15 13:06:18.957446] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:27:00.970 [2024-07-15 13:06:18.957541] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.970 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.970 [2024-07-15 13:06:19.021326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.970 [2024-07-15 13:06:19.123959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.970 [2024-07-15 13:06:19.124016] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.970 [2024-07-15 13:06:19.124029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.970 [2024-07-15 13:06:19.124039] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.970 [2024-07-15 13:06:19.124048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.970 [2024-07-15 13:06:19.124130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.970 [2024-07-15 13:06:19.124237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.970 [2024-07-15 13:06:19.124335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.970 [2024-07-15 13:06:19.124342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:27:01.228 13:06:19 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.228 13:06:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:01.228 ************************************ 00:27:01.228 START TEST spdk_target_abort 00:27:01.228 ************************************ 00:27:01.228 13:06:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:01.228 13:06:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:01.228 13:06:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:27:01.228 13:06:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.228 13:06:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.519 spdk_targetn1 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.519 [2024-07-15 13:06:22.145115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:04.519 [2024-07-15 13:06:22.177335] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:04.519 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:04.520 13:06:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:04.520 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:07.884 Initializing NVMe Controllers 00:27:07.884 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:07.884 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:07.884 Initialization complete. Launching workers. 00:27:07.884 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11582, failed: 0 00:27:07.884 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1202, failed to submit 10380 00:27:07.884 success 729, unsuccess 473, failed 0 00:27:07.884 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:07.884 13:06:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:07.884 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.166 Initializing NVMe Controllers 00:27:11.166 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:11.166 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:11.166 Initialization complete. Launching workers. 00:27:11.166 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8536, failed: 0 00:27:11.167 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1223, failed to submit 7313 00:27:11.167 success 300, unsuccess 923, failed 0 00:27:11.167 13:06:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:11.167 13:06:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.167 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.693 Initializing NVMe Controllers 00:27:13.694 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:13.694 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:13.694 Initialization complete. Launching workers. 
00:27:13.694 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31658, failed: 0 00:27:13.694 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2711, failed to submit 28947 00:27:13.694 success 547, unsuccess 2164, failed 0 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.694 13:06:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3519849 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3519849 ']' 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3519849 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3519849 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3519849' 00:27:15.073 killing process with pid 3519849 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3519849 00:27:15.073 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3519849 00:27:15.331 00:27:15.331 real 0m14.231s 00:27:15.331 user 0m53.609s 00:27:15.331 sys 0m2.880s 00:27:15.331 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.331 13:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:15.331 ************************************ 00:27:15.331 END TEST spdk_target_abort 00:27:15.331 ************************************ 00:27:15.591 13:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:15.591 13:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:15.591 13:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:15.591 13:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.591 13:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:15.591 
************************************ 00:27:15.591 START TEST kernel_target_abort 00:27:15.591 ************************************ 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:15.591 13:06:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:16.530 Waiting for block devices as requested 00:27:16.530 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:16.788 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:16.788 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:17.046 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:17.046 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:17.046 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:17.046 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:17.305 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:17.305 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:17.305 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:17.305 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:17.565 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:17.565 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:17.565 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:17.565 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:17.824 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:17.824 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:17.824 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:18.084 No valid GPT data, bailing 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:18.084 13:06:36 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:27:18.084 00:27:18.084 Discovery Log Number of Records 2, Generation counter 2 00:27:18.084 =====Discovery Log Entry 0====== 00:27:18.084 trtype: tcp 00:27:18.084 adrfam: ipv4 00:27:18.084 subtype: current discovery subsystem 00:27:18.084 treq: not specified, sq flow control disable supported 00:27:18.084 portid: 1 00:27:18.084 trsvcid: 4420 00:27:18.084 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:18.084 traddr: 10.0.0.1 00:27:18.084 eflags: none 00:27:18.084 sectype: none 00:27:18.084 =====Discovery Log Entry 1====== 00:27:18.084 trtype: tcp 00:27:18.084 adrfam: ipv4 00:27:18.084 subtype: nvme subsystem 00:27:18.084 treq: not specified, sq flow control disable supported 00:27:18.084 portid: 1 00:27:18.084 trsvcid: 4420 00:27:18.084 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:18.084 traddr: 10.0.0.1 00:27:18.084 eflags: none 00:27:18.084 sectype: none 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:18.084 13:06:36 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:18.084 13:06:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:18.084 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.376 Initializing NVMe Controllers 00:27:21.376 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:21.376 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:21.376 Initialization complete. Launching workers. 00:27:21.376 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51269, failed: 0 00:27:21.376 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 51269, failed to submit 0 00:27:21.376 success 0, unsuccess 51269, failed 0 00:27:21.376 13:06:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:21.376 13:06:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:21.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.666 Initializing NVMe Controllers 00:27:24.666 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:24.666 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:24.666 Initialization complete. Launching workers. 
00:27:24.666 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95275, failed: 0 00:27:24.666 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24054, failed to submit 71221 00:27:24.666 success 0, unsuccess 24054, failed 0 00:27:24.666 13:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:24.666 13:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:24.666 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.960 Initializing NVMe Controllers 00:27:27.960 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:27.960 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:27.960 Initialization complete. Launching workers. 00:27:27.960 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92974, failed: 0 00:27:27.960 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23246, failed to submit 69728 00:27:27.960 success 0, unsuccess 23246, failed 0 00:27:27.960 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:27.960 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:27.960 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:27.960 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:27.961 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:27.961 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:27.961 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:27.961 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:27.961 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:27.961 13:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:28.896 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:28.896 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:28.896 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:28.896 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:29.835 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:29.835 00:27:29.835 real 0m14.402s 00:27:29.835 user 0m6.289s 00:27:29.835 sys 0m3.277s 00:27:29.835 13:06:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:29.835 13:06:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.835 ************************************ 00:27:29.835 END TEST kernel_target_abort 00:27:29.835 ************************************ 00:27:29.835 13:06:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:29.835 13:06:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:29.835 13:06:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:29.835 13:06:47 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:29.835 13:06:47 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:29.835 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:29.835 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:29.835 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:29.835 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:29.835 rmmod nvme_tcp 00:27:29.835 rmmod nvme_fabrics 00:27:29.835 rmmod nvme_keyring 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3519849 ']' 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3519849 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3519849 ']' 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3519849 00:27:30.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3519849) - No such process 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3519849 is not found' 00:27:30.095 Process with pid 3519849 is not found 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:30.095 13:06:48 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:31.035 Waiting for block devices as requested 00:27:31.035 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:27:31.035 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:31.293 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:31.293 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:31.293 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:31.551 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:31.551 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:31.551 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:31.551 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:31.810 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:31.810 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:31.810 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:32.070 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:32.070 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:32.070 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:32.070 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:32.329 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:32.329 13:06:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.855 13:06:52 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.855 00:27:34.855 real 0m38.158s 00:27:34.855 user 1m2.034s 00:27:34.855 sys 0m9.495s 00:27:34.855 13:06:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.856 13:06:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:34.856 ************************************ 00:27:34.856 END TEST nvmf_abort_qd_sizes 00:27:34.856 ************************************ 00:27:34.856 13:06:52 -- common/autotest_common.sh@1142 -- # return 0 00:27:34.856 13:06:52 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:34.856 13:06:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:34.856 13:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.856 13:06:52 -- common/autotest_common.sh@10 -- # set +x 00:27:34.856 ************************************ 00:27:34.856 START TEST keyring_file 00:27:34.856 ************************************ 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:34.856 * Looking for test storage... 
00:27:34.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.856 13:06:52 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.856 13:06:52 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.856 13:06:52 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.856 13:06:52 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.856 13:06:52 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.856 13:06:52 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.856 13:06:52 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:34.856 13:06:52 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DFIrkdR8tT 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:34.856 13:06:52 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DFIrkdR8tT 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DFIrkdR8tT 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DFIrkdR8tT 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0kgiddyWq5 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:34.856 13:06:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0kgiddyWq5 00:27:34.856 13:06:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0kgiddyWq5 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0kgiddyWq5 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@30 -- # tgtpid=3525630 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:34.856 13:06:52 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3525630 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3525630 ']' 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.856 13:06:52 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:34.856 [2024-07-15 13:06:52.718898] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:27:34.856 [2024-07-15 13:06:52.718983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525630 ] 00:27:34.856 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.856 [2024-07-15 13:06:52.779290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.856 [2024-07-15 13:06:52.885544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:35.115 13:06:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:35.115 [2024-07-15 13:06:53.105628] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.115 null0 00:27:35.115 [2024-07-15 13:06:53.137675] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:35.115 [2024-07-15 13:06:53.138195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:35.115 [2024-07-15 13:06:53.145684] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.115 13:06:53 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:35.115 [2024-07-15 13:06:53.157706] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:35.115 request: 00:27:35.115 { 00:27:35.115 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.115 "secure_channel": false, 00:27:35.115 "listen_address": { 00:27:35.115 "trtype": "tcp", 00:27:35.115 "traddr": "127.0.0.1", 00:27:35.115 "trsvcid": "4420" 00:27:35.115 }, 00:27:35.115 "method": "nvmf_subsystem_add_listener", 00:27:35.115 "req_id": 1 00:27:35.115 } 00:27:35.115 Got JSON-RPC error response 00:27:35.115 response: 00:27:35.115 { 00:27:35.115 "code": -32602, 00:27:35.115 "message": "Invalid parameters" 00:27:35.115 } 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.115 13:06:53 keyring_file -- keyring/file.sh@46 -- # bperfpid=3525641 00:27:35.115 13:06:53 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:35.115 13:06:53 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3525641 /var/tmp/bperf.sock 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3525641 ']' 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.115 13:06:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:35.115 [2024-07-15 13:06:53.203119] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 00:27:35.115 [2024-07-15 13:06:53.203184] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3525641 ] 00:27:35.115 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.115 [2024-07-15 13:06:53.259034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.374 [2024-07-15 13:06:53.365348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.374 13:06:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.374 13:06:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:35.374 13:06:53 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:35.374 13:06:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:35.631 13:06:53 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0kgiddyWq5 00:27:35.631 13:06:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0kgiddyWq5 00:27:35.937 13:06:53 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:35.937 13:06:53 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:35.937 13:06:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.937 13:06:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.937 13:06:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.222 13:06:54 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.DFIrkdR8tT == \/\t\m\p\/\t\m\p\.\D\F\I\r\k\d\R\8\t\T ]] 00:27:36.222 13:06:54 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:36.222 13:06:54 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:36.222 13:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.222 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.222 13:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:36.481 13:06:54 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0kgiddyWq5 == \/\t\m\p\/\t\m\p\.\0\k\g\i\d\d\y\W\q\5 ]] 00:27:36.481 13:06:54 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:36.481 13:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.481 13:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.481 13:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.481 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.481 13:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.739 13:06:54 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:36.739 13:06:54 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:36.739 13:06:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:36.739 13:06:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.739 13:06:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.739 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.739 13:06:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:36.997 13:06:54 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:36.998 13:06:54 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.998 13:06:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.256 [2024-07-15 13:06:55.213712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:37.256 nvme0n1 00:27:37.256 13:06:55 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:37.256 13:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.256 13:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.256 13:06:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.256 13:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.256 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.514 13:06:55 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:37.514 13:06:55 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:37.514 13:06:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:37.514 13:06:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.514 13:06:55 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.514 13:06:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:37.514 13:06:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.774 13:06:55 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:37.774 13:06:55 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:37.774 Running I/O for 1 seconds... 00:27:39.151 00:27:39.151 Latency(us) 00:27:39.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.151 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:39.151 nvme0n1 : 1.01 9029.95 35.27 0.00 0.00 14112.79 4563.25 20291.89 00:27:39.151 =================================================================================================================== 00:27:39.151 Total : 9029.95 35.27 0.00 0.00 14112.79 4563.25 20291.89 00:27:39.151 0 00:27:39.151 13:06:56 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:39.151 13:06:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:39.151 13:06:57 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:39.151 13:06:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:39.151 13:06:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.151 13:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.151 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.151 13:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:39.409 13:06:57 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:39.409 13:06:57 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:39.409 13:06:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:39.409 13:06:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.409 13:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.409 13:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:39.409 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.667 13:06:57 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:39.667 13:06:57 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:39.667 13:06:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:39.667 13:06:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:39.667 13:06:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:39.667 13:06:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.667 13:06:57 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:39.667 13:06:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:39.667 13:06:57 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:39.667 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:39.926 [2024-07-15 13:06:57.916534] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:39.926 [2024-07-15 13:06:57.917219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd3bd0 (107): Transport endpoint is not connected 00:27:39.926 [2024-07-15 13:06:57.918212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd3bd0 (9): Bad file descriptor 00:27:39.926 [2024-07-15 13:06:57.919211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:39.926 [2024-07-15 13:06:57.919231] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:39.926 [2024-07-15 13:06:57.919244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:39.926 request: 00:27:39.926 { 00:27:39.926 "name": "nvme0", 00:27:39.926 "trtype": "tcp", 00:27:39.926 "traddr": "127.0.0.1", 00:27:39.926 "adrfam": "ipv4", 00:27:39.926 "trsvcid": "4420", 00:27:39.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.926 "prchk_reftag": false, 00:27:39.926 "prchk_guard": false, 00:27:39.926 "hdgst": false, 00:27:39.926 "ddgst": false, 00:27:39.926 "psk": "key1", 00:27:39.926 "method": "bdev_nvme_attach_controller", 00:27:39.926 "req_id": 1 00:27:39.926 } 00:27:39.926 Got JSON-RPC error response 00:27:39.926 response: 00:27:39.926 { 00:27:39.926 "code": -5, 00:27:39.926 "message": "Input/output error" 00:27:39.926 } 00:27:39.926 13:06:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:39.926 13:06:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:39.926 13:06:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:39.926 13:06:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:39.926 13:06:57 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:39.926 13:06:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:39.926 13:06:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:39.926 13:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:39.926 13:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:39.926 13:06:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.184 13:06:58 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:40.184 13:06:58 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:40.184 13:06:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:40.184 13:06:58 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:40.184 13:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:40.184 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.184 13:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:40.442 13:06:58 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:40.442 13:06:58 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:40.442 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:40.700 13:06:58 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:40.700 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:40.959 13:06:58 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:40.959 13:06:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.959 13:06:58 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:41.219 13:06:59 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:41.219 13:06:59 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.DFIrkdR8tT 00:27:41.219 13:06:59 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.219 13:06:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:41.219 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:41.219 [2024-07-15 13:06:59.421318] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DFIrkdR8tT': 0100660 00:27:41.219 [2024-07-15 13:06:59.421379] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:41.219 request: 00:27:41.219 { 00:27:41.219 "name": "key0", 00:27:41.219 "path": "/tmp/tmp.DFIrkdR8tT", 00:27:41.219 "method": "keyring_file_add_key", 00:27:41.219 "req_id": 1 00:27:41.219 } 00:27:41.220 Got JSON-RPC error response 00:27:41.220 response: 00:27:41.220 { 00:27:41.220 "code": -1, 00:27:41.220 "message": "Operation not permitted" 00:27:41.220 } 00:27:41.479 13:06:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:41.479 13:06:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:41.479 13:06:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:41.479 13:06:59 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:41.479 13:06:59 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.DFIrkdR8tT 00:27:41.479 13:06:59 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:41.479 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DFIrkdR8tT 00:27:41.479 13:06:59 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.DFIrkdR8tT 00:27:41.737 13:06:59 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:41.737 13:06:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.737 13:06:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.737 13:06:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.737 13:06:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.737 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.737 13:06:59 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:41.737 13:06:59 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:41.737 13:06:59 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.737 13:06:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:41.995 [2024-07-15 13:07:00.163359] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DFIrkdR8tT': No such file or directory 00:27:41.995 [2024-07-15 13:07:00.163406] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:41.995 [2024-07-15 13:07:00.163434] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:41.995 [2024-07-15 13:07:00.163445] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:41.995 [2024-07-15 13:07:00.163457] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:41.995 request: 00:27:41.995 { 00:27:41.995 "name": "nvme0", 00:27:41.995 "trtype": "tcp", 00:27:41.995 "traddr": "127.0.0.1", 00:27:41.995 "adrfam": "ipv4", 00:27:41.995 
"trsvcid": "4420", 00:27:41.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:41.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:41.995 "prchk_reftag": false, 00:27:41.995 "prchk_guard": false, 00:27:41.995 "hdgst": false, 00:27:41.995 "ddgst": false, 00:27:41.995 "psk": "key0", 00:27:41.995 "method": "bdev_nvme_attach_controller", 00:27:41.995 "req_id": 1 00:27:41.995 } 00:27:41.995 Got JSON-RPC error response 00:27:41.995 response: 00:27:41.995 { 00:27:41.995 "code": -19, 00:27:41.995 "message": "No such device" 00:27:41.995 } 00:27:41.995 13:07:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:41.995 13:07:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:41.995 13:07:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:41.995 13:07:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:41.995 13:07:00 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:41.995 13:07:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:42.255 13:07:00 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BCiQbT7bZt 00:27:42.255 13:07:00 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:42.255 13:07:00 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:42.255 13:07:00 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.255 13:07:00 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:42.255 13:07:00 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:42.255 13:07:00 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:42.255 13:07:00 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:42.513 13:07:00 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BCiQbT7bZt 00:27:42.513 13:07:00 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BCiQbT7bZt 00:27:42.513 13:07:00 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.BCiQbT7bZt 00:27:42.513 13:07:00 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BCiQbT7bZt 00:27:42.513 13:07:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BCiQbT7bZt 00:27:42.773 13:07:00 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:42.773 13:07:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:43.030 nvme0n1 00:27:43.030 
13:07:01 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:43.030 13:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:43.030 13:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:43.030 13:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.030 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.030 13:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:43.288 13:07:01 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:43.288 13:07:01 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:43.288 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:43.545 13:07:01 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:43.545 13:07:01 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:43.545 13:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.545 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:43.545 13:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:43.802 13:07:01 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:43.802 13:07:01 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:43.802 13:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:43.802 13:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:43.802 13:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:43.802 13:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:43.802 13:07:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.060 13:07:02 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:44.060 13:07:02 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:44.060 13:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:44.317 13:07:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:44.317 13:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.317 13:07:02 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:44.317 13:07:02 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:44.575 13:07:02 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.BCiQbT7bZt 00:27:44.575 13:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.BCiQbT7bZt 00:27:44.833 13:07:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0kgiddyWq5 00:27:44.834 13:07:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0kgiddyWq5 00:27:44.834 13:07:03 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:44.834 13:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:45.407 nvme0n1 00:27:45.407 13:07:03 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:45.407 13:07:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:45.666 13:07:03 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:45.666 "subsystems": [ 00:27:45.666 { 00:27:45.666 "subsystem": "keyring", 00:27:45.666 "config": [ 00:27:45.666 { 00:27:45.666 "method": "keyring_file_add_key", 00:27:45.666 "params": { 00:27:45.666 "name": "key0", 00:27:45.666 "path": "/tmp/tmp.BCiQbT7bZt" 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "keyring_file_add_key", 00:27:45.666 "params": { 00:27:45.666 "name": "key1", 00:27:45.666 "path": "/tmp/tmp.0kgiddyWq5" 00:27:45.666 } 00:27:45.666 } 00:27:45.666 ] 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "subsystem": "iobuf", 00:27:45.666 "config": [ 00:27:45.666 { 00:27:45.666 "method": "iobuf_set_options", 00:27:45.666 "params": { 00:27:45.666 "small_pool_count": 8192, 00:27:45.666 "large_pool_count": 1024, 00:27:45.666 "small_bufsize": 8192, 00:27:45.666 "large_bufsize": 135168 00:27:45.666 } 00:27:45.666 } 00:27:45.666 ] 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "subsystem": "sock", 00:27:45.666 "config": [ 00:27:45.666 { 00:27:45.666 "method": "sock_set_default_impl", 00:27:45.666 "params": { 00:27:45.666 "impl_name": "posix" 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "sock_impl_set_options", 00:27:45.666 "params": { 00:27:45.666 "impl_name": "ssl", 00:27:45.666 "recv_buf_size": 4096, 00:27:45.666 "send_buf_size": 4096, 00:27:45.666 "enable_recv_pipe": true, 00:27:45.666 "enable_quickack": false, 00:27:45.666 "enable_placement_id": 0, 00:27:45.666 "enable_zerocopy_send_server": true, 00:27:45.666 "enable_zerocopy_send_client": false, 00:27:45.666 "zerocopy_threshold": 0, 00:27:45.666 "tls_version": 0, 00:27:45.666 "enable_ktls": false 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "sock_impl_set_options", 00:27:45.666 "params": { 00:27:45.666 "impl_name": "posix", 00:27:45.666 "recv_buf_size": 2097152, 00:27:45.666 "send_buf_size": 2097152, 00:27:45.666 "enable_recv_pipe": true, 00:27:45.666 "enable_quickack": false, 00:27:45.666 "enable_placement_id": 0, 00:27:45.666 "enable_zerocopy_send_server": true, 00:27:45.666 "enable_zerocopy_send_client": false, 00:27:45.666 "zerocopy_threshold": 0, 00:27:45.666 "tls_version": 0, 00:27:45.666 "enable_ktls": false 00:27:45.666 } 00:27:45.666 } 00:27:45.666 ] 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "subsystem": "vmd", 00:27:45.666 "config": [] 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "subsystem": "accel", 00:27:45.666 "config": [ 00:27:45.666 { 00:27:45.666 "method": "accel_set_options", 00:27:45.666 "params": { 00:27:45.666 "small_cache_size": 128, 00:27:45.666 "large_cache_size": 16, 00:27:45.666 "task_count": 2048, 00:27:45.666 "sequence_count": 2048, 00:27:45.666 "buf_count": 2048 00:27:45.666 } 00:27:45.666 } 00:27:45.666 ] 00:27:45.666 
}, 00:27:45.666 { 00:27:45.666 "subsystem": "bdev", 00:27:45.666 "config": [ 00:27:45.666 { 00:27:45.666 "method": "bdev_set_options", 00:27:45.666 "params": { 00:27:45.666 "bdev_io_pool_size": 65535, 00:27:45.666 "bdev_io_cache_size": 256, 00:27:45.666 "bdev_auto_examine": true, 00:27:45.666 "iobuf_small_cache_size": 128, 00:27:45.666 "iobuf_large_cache_size": 16 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "bdev_raid_set_options", 00:27:45.666 "params": { 00:27:45.666 "process_window_size_kb": 1024 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "bdev_iscsi_set_options", 00:27:45.666 "params": { 00:27:45.666 "timeout_sec": 30 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "bdev_nvme_set_options", 00:27:45.666 "params": { 00:27:45.666 "action_on_timeout": "none", 00:27:45.666 "timeout_us": 0, 00:27:45.666 "timeout_admin_us": 0, 00:27:45.666 "keep_alive_timeout_ms": 10000, 00:27:45.666 "arbitration_burst": 0, 00:27:45.666 "low_priority_weight": 0, 00:27:45.666 "medium_priority_weight": 0, 00:27:45.666 "high_priority_weight": 0, 00:27:45.666 "nvme_adminq_poll_period_us": 10000, 00:27:45.666 "nvme_ioq_poll_period_us": 0, 00:27:45.666 "io_queue_requests": 512, 00:27:45.666 "delay_cmd_submit": true, 00:27:45.666 "transport_retry_count": 4, 00:27:45.666 "bdev_retry_count": 3, 00:27:45.666 "transport_ack_timeout": 0, 00:27:45.666 "ctrlr_loss_timeout_sec": 0, 00:27:45.666 "reconnect_delay_sec": 0, 00:27:45.666 "fast_io_fail_timeout_sec": 0, 00:27:45.666 "disable_auto_failback": false, 00:27:45.666 "generate_uuids": false, 00:27:45.666 "transport_tos": 0, 00:27:45.666 "nvme_error_stat": false, 00:27:45.666 "rdma_srq_size": 0, 00:27:45.666 "io_path_stat": false, 00:27:45.666 "allow_accel_sequence": false, 00:27:45.666 "rdma_max_cq_size": 0, 00:27:45.666 "rdma_cm_event_timeout_ms": 0, 00:27:45.666 "dhchap_digests": [ 00:27:45.666 "sha256", 00:27:45.666 "sha384", 00:27:45.666 "sha512" 00:27:45.666 ], 00:27:45.666 "dhchap_dhgroups": [ 00:27:45.666 "null", 00:27:45.666 "ffdhe2048", 00:27:45.666 "ffdhe3072", 00:27:45.666 "ffdhe4096", 00:27:45.666 "ffdhe6144", 00:27:45.666 "ffdhe8192" 00:27:45.666 ] 00:27:45.666 } 00:27:45.666 }, 00:27:45.666 { 00:27:45.666 "method": "bdev_nvme_attach_controller", 00:27:45.666 "params": { 00:27:45.666 "name": "nvme0", 00:27:45.666 "trtype": "TCP", 00:27:45.666 "adrfam": "IPv4", 00:27:45.666 "traddr": "127.0.0.1", 00:27:45.666 "trsvcid": "4420", 00:27:45.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.666 "prchk_reftag": false, 00:27:45.666 "prchk_guard": false, 00:27:45.666 "ctrlr_loss_timeout_sec": 0, 00:27:45.666 "reconnect_delay_sec": 0, 00:27:45.666 "fast_io_fail_timeout_sec": 0, 00:27:45.666 "psk": "key0", 00:27:45.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.667 "hdgst": false, 00:27:45.667 "ddgst": false 00:27:45.667 } 00:27:45.667 }, 00:27:45.667 { 00:27:45.667 "method": "bdev_nvme_set_hotplug", 00:27:45.667 "params": { 00:27:45.667 "period_us": 100000, 00:27:45.667 "enable": false 00:27:45.667 } 00:27:45.667 }, 00:27:45.667 { 00:27:45.667 "method": "bdev_wait_for_examine" 00:27:45.667 } 00:27:45.667 ] 00:27:45.667 }, 00:27:45.667 { 00:27:45.667 "subsystem": "nbd", 00:27:45.667 "config": [] 00:27:45.667 } 00:27:45.667 ] 00:27:45.667 }' 00:27:45.667 13:07:03 keyring_file -- keyring/file.sh@114 -- # killprocess 3525641 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3525641 ']' 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 3525641 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3525641 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3525641' 00:27:45.667 killing process with pid 3525641 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@967 -- # kill 3525641 00:27:45.667 Received shutdown signal, test time was about 1.000000 seconds 00:27:45.667 00:27:45.667 Latency(us) 00:27:45.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.667 =================================================================================================================== 00:27:45.667 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:45.667 13:07:03 keyring_file -- common/autotest_common.sh@972 -- # wait 3525641 00:27:45.925 13:07:03 keyring_file -- keyring/file.sh@117 -- # bperfpid=3527098 00:27:45.925 13:07:03 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3527098 /var/tmp/bperf.sock 00:27:45.925 13:07:03 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3527098 ']' 00:27:45.925 13:07:03 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.925 13:07:03 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:45.925 13:07:03 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.925 13:07:03 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
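The trace above boils down to a save-and-restore cycle: the first bperf instance's configuration (the two file-based keys plus the TLS-attached controller) is dumped with save_config, that instance is killed, and a fresh bdevperf is started with the saved config fed back in over /dev/fd/63 so everything is recreated before any further RPC. A rough stand-alone equivalent, using a temporary file instead of the process substitution the test uses (the /tmp/bperf-config.json name is illustrative only), followed by the sanity checks that file.sh@120-123 performs further down:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

  # 1. Dump the live configuration, including the keyring_file_add_key and
  #    bdev_nvme_attach_controller entries visible in the JSON above.
  $rpc save_config > /tmp/bperf-config.json

  # 2. Stop the old instance, then start a new one that replays the config at startup;
  #    -z keeps it idle until the perform_tests RPC is sent.
  $bperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /tmp/bperf-config.json &
  # (the test waits for the new socket with waitforlisten before issuing any RPC)

  # 3. Confirm the replay took effect (mirrors get_key/get_refcnt in keyring/common.sh).
  $rpc keyring_get_keys | jq length                                     # expect 2: key0 and key1 are back
  $rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'  # expect 2: keyring + attached controller
  $rpc bdev_nvme_get_controllers | jq -r '.[].name'                     # expect nvme0, recreated from the config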
00:27:45.925 13:07:03 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:45.925 "subsystems": [ 00:27:45.925 { 00:27:45.925 "subsystem": "keyring", 00:27:45.925 "config": [ 00:27:45.925 { 00:27:45.925 "method": "keyring_file_add_key", 00:27:45.925 "params": { 00:27:45.925 "name": "key0", 00:27:45.925 "path": "/tmp/tmp.BCiQbT7bZt" 00:27:45.925 } 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "method": "keyring_file_add_key", 00:27:45.925 "params": { 00:27:45.925 "name": "key1", 00:27:45.925 "path": "/tmp/tmp.0kgiddyWq5" 00:27:45.925 } 00:27:45.925 } 00:27:45.925 ] 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "subsystem": "iobuf", 00:27:45.925 "config": [ 00:27:45.925 { 00:27:45.925 "method": "iobuf_set_options", 00:27:45.925 "params": { 00:27:45.925 "small_pool_count": 8192, 00:27:45.925 "large_pool_count": 1024, 00:27:45.925 "small_bufsize": 8192, 00:27:45.925 "large_bufsize": 135168 00:27:45.925 } 00:27:45.925 } 00:27:45.925 ] 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "subsystem": "sock", 00:27:45.925 "config": [ 00:27:45.925 { 00:27:45.925 "method": "sock_set_default_impl", 00:27:45.925 "params": { 00:27:45.925 "impl_name": "posix" 00:27:45.925 } 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "method": "sock_impl_set_options", 00:27:45.925 "params": { 00:27:45.925 "impl_name": "ssl", 00:27:45.925 "recv_buf_size": 4096, 00:27:45.925 "send_buf_size": 4096, 00:27:45.925 "enable_recv_pipe": true, 00:27:45.925 "enable_quickack": false, 00:27:45.925 "enable_placement_id": 0, 00:27:45.925 "enable_zerocopy_send_server": true, 00:27:45.925 "enable_zerocopy_send_client": false, 00:27:45.925 "zerocopy_threshold": 0, 00:27:45.925 "tls_version": 0, 00:27:45.925 "enable_ktls": false 00:27:45.925 } 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "method": "sock_impl_set_options", 00:27:45.925 "params": { 00:27:45.925 "impl_name": "posix", 00:27:45.925 "recv_buf_size": 2097152, 00:27:45.925 "send_buf_size": 2097152, 00:27:45.925 "enable_recv_pipe": true, 00:27:45.925 "enable_quickack": false, 00:27:45.925 "enable_placement_id": 0, 00:27:45.925 "enable_zerocopy_send_server": true, 00:27:45.925 "enable_zerocopy_send_client": false, 00:27:45.925 "zerocopy_threshold": 0, 00:27:45.925 "tls_version": 0, 00:27:45.925 "enable_ktls": false 00:27:45.925 } 00:27:45.925 } 00:27:45.925 ] 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "subsystem": "vmd", 00:27:45.925 "config": [] 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "subsystem": "accel", 00:27:45.925 "config": [ 00:27:45.925 { 00:27:45.925 "method": "accel_set_options", 00:27:45.925 "params": { 00:27:45.925 "small_cache_size": 128, 00:27:45.925 "large_cache_size": 16, 00:27:45.925 "task_count": 2048, 00:27:45.925 "sequence_count": 2048, 00:27:45.925 "buf_count": 2048 00:27:45.925 } 00:27:45.925 } 00:27:45.925 ] 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "subsystem": "bdev", 00:27:45.925 "config": [ 00:27:45.925 { 00:27:45.925 "method": "bdev_set_options", 00:27:45.925 "params": { 00:27:45.925 "bdev_io_pool_size": 65535, 00:27:45.925 "bdev_io_cache_size": 256, 00:27:45.925 "bdev_auto_examine": true, 00:27:45.925 "iobuf_small_cache_size": 128, 00:27:45.925 "iobuf_large_cache_size": 16 00:27:45.925 } 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "method": "bdev_raid_set_options", 00:27:45.925 "params": { 00:27:45.925 "process_window_size_kb": 1024 00:27:45.925 } 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "method": "bdev_iscsi_set_options", 00:27:45.925 "params": { 00:27:45.925 "timeout_sec": 30 00:27:45.925 } 00:27:45.925 }, 00:27:45.925 { 00:27:45.925 "method": 
"bdev_nvme_set_options", 00:27:45.925 "params": { 00:27:45.925 "action_on_timeout": "none", 00:27:45.925 "timeout_us": 0, 00:27:45.925 "timeout_admin_us": 0, 00:27:45.925 "keep_alive_timeout_ms": 10000, 00:27:45.925 "arbitration_burst": 0, 00:27:45.925 "low_priority_weight": 0, 00:27:45.925 "medium_priority_weight": 0, 00:27:45.926 "high_priority_weight": 0, 00:27:45.926 "nvme_adminq_poll_period_us": 10000, 00:27:45.926 "nvme_ioq_poll_period_us": 0, 00:27:45.926 "io_queue_requests": 512, 00:27:45.926 "delay_cmd_submit": true, 00:27:45.926 "transport_retry_count": 4, 00:27:45.926 "bdev_retry_count": 3, 00:27:45.926 "transport_ack_timeout": 0, 00:27:45.926 "ctrlr_loss_timeout_sec": 0, 00:27:45.926 "reconnect_delay_sec": 0, 00:27:45.926 "fast_io_fail_timeout_sec": 0, 00:27:45.926 "disable_auto_failback": false, 00:27:45.926 "generate_uuids": false, 00:27:45.926 "transport_tos": 0, 00:27:45.926 "nvme_error_stat": false, 00:27:45.926 "rdma_srq_size": 0, 00:27:45.926 "io_path_stat": false, 00:27:45.926 "allow_accel_sequence": false, 00:27:45.926 "rdma_max_cq_size": 0, 00:27:45.926 "rdma_cm_event_timeout_ms": 0, 00:27:45.926 "dhchap_digests": [ 00:27:45.926 "sha256", 00:27:45.926 "sha384", 00:27:45.926 "sha512" 00:27:45.926 ], 00:27:45.926 "dhchap_dhgroups": [ 00:27:45.926 "null", 00:27:45.926 "ffdhe2048", 00:27:45.926 "ffdhe3072", 00:27:45.926 "ffdhe4096", 00:27:45.926 "ffdhe6144", 00:27:45.926 "ffdhe8192" 00:27:45.926 ] 00:27:45.926 } 00:27:45.926 }, 00:27:45.926 { 00:27:45.926 "method": "bdev_nvme_attach_controller", 00:27:45.926 "params": { 00:27:45.926 "name": "nvme0", 00:27:45.926 "trtype": "TCP", 00:27:45.926 "adrfam": "IPv4", 00:27:45.926 "traddr": "127.0.0.1", 00:27:45.926 "trsvcid": "4420", 00:27:45.926 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.926 "prchk_reftag": false, 00:27:45.926 "prchk_guard": false, 00:27:45.926 "ctrlr_loss_timeout_sec": 0, 00:27:45.926 "reconnect_delay_sec": 0, 00:27:45.926 "fast_io_fail_timeout_sec": 0, 00:27:45.926 "psk": "key0", 00:27:45.926 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.926 "hdgst": false, 00:27:45.926 "ddgst": false 00:27:45.926 } 00:27:45.926 }, 00:27:45.926 { 00:27:45.926 "method": "bdev_nvme_set_hotplug", 00:27:45.926 "params": { 00:27:45.926 "period_us": 100000, 00:27:45.926 "enable": false 00:27:45.926 } 00:27:45.926 }, 00:27:45.926 { 00:27:45.926 "method": "bdev_wait_for_examine" 00:27:45.926 } 00:27:45.926 ] 00:27:45.926 }, 00:27:45.926 { 00:27:45.926 "subsystem": "nbd", 00:27:45.926 "config": [] 00:27:45.926 } 00:27:45.926 ] 00:27:45.926 }' 00:27:45.926 13:07:03 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.926 13:07:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:45.926 [2024-07-15 13:07:03.956271] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
00:27:45.926 [2024-07-15 13:07:03.956362] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527098 ] 00:27:45.926 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.926 [2024-07-15 13:07:04.015232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.926 [2024-07-15 13:07:04.120073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.183 [2024-07-15 13:07:04.294085] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:46.748 13:07:04 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.748 13:07:04 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:46.748 13:07:04 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:46.748 13:07:04 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:46.748 13:07:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.005 13:07:05 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:47.005 13:07:05 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:47.005 13:07:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:47.005 13:07:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.005 13:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.005 13:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:47.005 13:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.262 13:07:05 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:47.262 13:07:05 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:47.262 13:07:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:47.262 13:07:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:47.262 13:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:47.262 13:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:47.262 13:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.520 13:07:05 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:47.520 13:07:05 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:47.520 13:07:05 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:47.520 13:07:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:47.777 13:07:05 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:47.777 13:07:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:47.777 13:07:05 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.BCiQbT7bZt /tmp/tmp.0kgiddyWq5 00:27:47.777 13:07:05 keyring_file -- keyring/file.sh@20 -- # killprocess 3527098 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3527098 ']' 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3527098 00:27:47.777 13:07:05 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3527098 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3527098' 00:27:47.777 killing process with pid 3527098 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@967 -- # kill 3527098 00:27:47.777 13:07:05 keyring_file -- common/autotest_common.sh@972 -- # wait 3527098 00:27:47.777 Received shutdown signal, test time was about 1.000000 seconds 00:27:47.777 00:27:47.777 Latency(us) 00:27:47.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.777 =================================================================================================================== 00:27:47.777 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:48.036 13:07:06 keyring_file -- keyring/file.sh@21 -- # killprocess 3525630 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3525630 ']' 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3525630 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3525630 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3525630' 00:27:48.036 killing process with pid 3525630 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@967 -- # kill 3525630 00:27:48.036 [2024-07-15 13:07:06.192008] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:48.036 13:07:06 keyring_file -- common/autotest_common.sh@972 -- # wait 3525630 00:27:48.602 00:27:48.602 real 0m14.065s 00:27:48.602 user 0m35.197s 00:27:48.602 sys 0m3.229s 00:27:48.602 13:07:06 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.602 13:07:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:48.602 ************************************ 00:27:48.602 END TEST keyring_file 00:27:48.602 ************************************ 00:27:48.602 13:07:06 -- common/autotest_common.sh@1142 -- # return 0 00:27:48.602 13:07:06 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:48.602 13:07:06 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:48.602 13:07:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:48.602 13:07:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:48.602 13:07:06 -- common/autotest_common.sh@10 -- # set +x 00:27:48.602 ************************************ 00:27:48.602 START TEST keyring_linux 00:27:48.602 ************************************ 00:27:48.602 13:07:06 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:48.602 * Looking for test storage... 00:27:48.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:48.602 13:07:06 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.602 13:07:06 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.602 13:07:06 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.602 13:07:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.602 13:07:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.602 13:07:06 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.602 13:07:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:48.602 13:07:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:48.602 13:07:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:48.602 13:07:06 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:48.602 /tmp/:spdk-test:key0 00:27:48.602 13:07:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:48.602 13:07:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:48.603 13:07:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:48.603 13:07:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:48.603 13:07:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:48.603 13:07:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:48.603 13:07:06 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:48.603 13:07:06 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:48.603 13:07:06 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:48.603 13:07:06 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:48.603 13:07:06 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:48.603 13:07:06 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:48.603 13:07:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:48.603 13:07:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:48.603 /tmp/:spdk-test:key1 00:27:48.603 13:07:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3527464 00:27:48.603 13:07:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:48.603 13:07:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3527464 00:27:48.603 13:07:06 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3527464 ']' 00:27:48.603 13:07:06 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.603 13:07:06 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:48.603 13:07:06 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.603 13:07:06 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:48.603 13:07:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:48.862 [2024-07-15 13:07:06.832455] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
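The two key files prepared just above come out of the same prep_key helper the file-based test used: write the key in interchange form (for digest 0 that is an NVMeTLSkey-1:00:<base64>: string, as the keyctl commands further down show), then tighten the mode to 0600, the permission keyring_file_add_key insists on (see the 0660 failure earlier). A condensed sketch of that helper as linux.sh@47-48 invokes it; the exact signature is an assumption read off the trace, and format_interchange_psk is nvmf/common.sh's inline-python helper, only called here, not reimplemented:

  # Condensed view of prep_key (keyring/common.sh@15-23) for the kernel-keyring test;
  # the four-argument form with an explicit path is what linux.sh uses.
  prep_key() {
      local name=$1 key=$2 digest=$3 path=$4
      format_interchange_psk "$key" "$digest" > "$path"
      chmod 0600 "$path"    # the shared helper always clamps the key file to 0600
      echo "$path"
  }

  key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0)
  key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1)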
00:27:48.862 [2024-07-15 13:07:06.832545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527464 ] 00:27:48.862 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.862 [2024-07-15 13:07:06.894194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.862 [2024-07-15 13:07:07.005508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:49.120 13:07:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:49.120 [2024-07-15 13:07:07.255805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:49.120 null0 00:27:49.120 [2024-07-15 13:07:07.287852] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:49.120 [2024-07-15 13:07:07.288347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.120 13:07:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:49.120 91956125 00:27:49.120 13:07:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:49.120 249300650 00:27:49.120 13:07:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3527589 00:27:49.120 13:07:07 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:49.120 13:07:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3527589 /var/tmp/bperf.sock 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3527589 ']' 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:49.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:49.120 13:07:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:49.378 [2024-07-15 13:07:07.351272] Starting SPDK v24.09-pre git sha1 6151edad3 / DPDK 24.03.0 initialization... 
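This is the kernel-keyring variant of the attach flow: both interchange strings have just been loaded into the session keyring with keyctl, and the steps that follow enable SPDK's Linux keyring plugin over RPC and attach the controller by key name instead of by file path. A rough replay of linux.sh@66-75 (serial numbers such as 91956125 are per-run values):

  # Load both test keys into the session keyring; keyctl prints the serial number,
  # which check_keys later re-derives with 'keyctl search @s user <name>'.
  keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s
  keyctl add user :spdk-test:key1 "$(cat /tmp/:spdk-test:key1)" @s

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_linux_set_options --enable   # let names like :spdk-test:key0 resolve via the kernel keyring
  $rpc framework_start_init                 # bperf was started with --wait-for-rpc
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0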
00:27:49.378 [2024-07-15 13:07:07.351334] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527589 ] 00:27:49.378 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.378 [2024-07-15 13:07:07.407455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.378 [2024-07-15 13:07:07.511662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.378 13:07:07 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:49.378 13:07:07 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:49.378 13:07:07 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:49.378 13:07:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:49.635 13:07:07 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:49.635 13:07:07 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:50.200 13:07:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:50.200 13:07:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:50.200 [2024-07-15 13:07:08.354333] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:50.457 nvme0n1 00:27:50.457 13:07:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:50.457 13:07:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:50.457 13:07:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:50.457 13:07:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:50.457 13:07:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:50.457 13:07:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.717 13:07:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:50.717 13:07:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:50.717 13:07:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:50.717 13:07:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:50.717 13:07:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:50.717 13:07:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:50.717 13:07:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@25 -- # sn=91956125 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@26 -- # [[ 91956125 == \9\1\9\5\6\1\2\5 ]] 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 91956125 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:50.976 13:07:08 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.976 Running I/O for 1 seconds... 00:27:51.912 00:27:51.912 Latency(us) 00:27:51.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.912 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:51.912 nvme0n1 : 1.01 9504.35 37.13 0.00 0.00 13365.50 10097.40 23787.14 00:27:51.912 =================================================================================================================== 00:27:51.912 Total : 9504.35 37.13 0.00 0.00 13365.50 10097.40 23787.14 00:27:51.912 0 00:27:51.912 13:07:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:51.912 13:07:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:52.170 13:07:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:52.170 13:07:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:52.170 13:07:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:52.170 13:07:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:52.170 13:07:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:52.170 13:07:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.426 13:07:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:52.426 13:07:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:52.426 13:07:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:52.426 13:07:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:52.426 13:07:10 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:52.426 13:07:10 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:52.683 [2024-07-15 13:07:10.832734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:52.683 [2024-07-15 13:07:10.832975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc70780 (107): Transport endpoint is not connected 00:27:52.683 [2024-07-15 13:07:10.833968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc70780 (9): Bad file descriptor 00:27:52.683 [2024-07-15 13:07:10.834967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:52.683 [2024-07-15 13:07:10.834988] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:52.683 [2024-07-15 13:07:10.835002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:52.683 request: 00:27:52.683 { 00:27:52.683 "name": "nvme0", 00:27:52.683 "trtype": "tcp", 00:27:52.683 "traddr": "127.0.0.1", 00:27:52.683 "adrfam": "ipv4", 00:27:52.683 "trsvcid": "4420", 00:27:52.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.683 "prchk_reftag": false, 00:27:52.683 "prchk_guard": false, 00:27:52.683 "hdgst": false, 00:27:52.683 "ddgst": false, 00:27:52.683 "psk": ":spdk-test:key1", 00:27:52.683 "method": "bdev_nvme_attach_controller", 00:27:52.683 "req_id": 1 00:27:52.683 } 00:27:52.683 Got JSON-RPC error response 00:27:52.683 response: 00:27:52.683 { 00:27:52.683 "code": -5, 00:27:52.683 "message": "Input/output error" 00:27:52.683 } 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@33 -- # sn=91956125 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 91956125 00:27:52.683 1 links removed 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@33 -- # sn=249300650 00:27:52.683 13:07:10 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 249300650 00:27:52.683 1 links removed 00:27:52.683 13:07:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3527589 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3527589 ']' 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3527589 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.683 13:07:10 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3527589 00:27:52.941 13:07:10 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:52.941 13:07:10 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:52.941 13:07:10 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3527589' 00:27:52.941 killing process with pid 3527589 00:27:52.941 13:07:10 keyring_linux -- common/autotest_common.sh@967 -- # kill 3527589 00:27:52.941 Received shutdown signal, test time was about 1.000000 seconds 00:27:52.941 00:27:52.941 Latency(us) 00:27:52.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.941 =================================================================================================================== 00:27:52.941 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:52.941 13:07:10 keyring_linux -- common/autotest_common.sh@972 -- # wait 3527589 00:27:53.199 13:07:11 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3527464 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3527464 ']' 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3527464 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3527464 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3527464' 00:27:53.199 killing process with pid 3527464 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@967 -- # kill 3527464 00:27:53.199 13:07:11 keyring_linux -- common/autotest_common.sh@972 -- # wait 3527464 00:27:53.456 00:27:53.457 real 0m4.984s 00:27:53.457 user 0m9.506s 00:27:53.457 sys 0m1.643s 00:27:53.457 13:07:11 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:53.457 13:07:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:53.457 ************************************ 00:27:53.457 END TEST keyring_linux 00:27:53.457 ************************************ 00:27:53.457 13:07:11 -- common/autotest_common.sh@1142 -- # return 0 00:27:53.457 13:07:11 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@339 
-- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:53.457 13:07:11 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:53.457 13:07:11 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:53.457 13:07:11 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:53.457 13:07:11 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:53.457 13:07:11 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:53.457 13:07:11 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:53.457 13:07:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.457 13:07:11 -- common/autotest_common.sh@10 -- # set +x 00:27:53.457 13:07:11 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:53.457 13:07:11 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:53.457 13:07:11 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:53.457 13:07:11 -- common/autotest_common.sh@10 -- # set +x 00:27:55.356 INFO: APP EXITING 00:27:55.356 INFO: killing all VMs 00:27:55.356 INFO: killing vhost app 00:27:55.356 INFO: EXIT DONE 00:27:56.753 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:27:56.753 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:56.753 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:56.753 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:56.753 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:56.753 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:56.753 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:56.753 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:56.753 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:56.753 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:56.753 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:56.753 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:56.753 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:56.753 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:56.753 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:56.753 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:56.753 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:58.140 Cleaning 00:27:58.140 Removing: /var/run/dpdk/spdk0/config 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:58.140 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:58.140 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:58.140 Removing: /var/run/dpdk/spdk1/config 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 
00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:58.140 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:58.140 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:58.140 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:58.140 Removing: /var/run/dpdk/spdk2/config 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:58.140 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:58.140 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:58.140 Removing: /var/run/dpdk/spdk3/config 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:58.140 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:58.141 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:58.141 Removing: /var/run/dpdk/spdk4/config 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:58.141 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:58.141 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:58.141 Removing: /dev/shm/bdev_svc_trace.1 00:27:58.141 Removing: /dev/shm/nvmf_trace.0 00:27:58.141 Removing: /dev/shm/spdk_tgt_trace.pid3267226 00:27:58.141 Removing: /var/run/dpdk/spdk0 00:27:58.141 Removing: /var/run/dpdk/spdk1 00:27:58.141 Removing: /var/run/dpdk/spdk2 00:27:58.141 Removing: /var/run/dpdk/spdk3 00:27:58.141 Removing: /var/run/dpdk/spdk4 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3265680 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3266415 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3267226 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3267665 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3268360 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3268500 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3269213 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3269344 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3269532 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3270777 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3271685 00:27:58.141 Removing: 
/var/run/dpdk/spdk_pid3271877 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3272068 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3272277 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3272482 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3272737 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3272891 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3273080 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3273385 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3275747 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3275918 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3276087 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3276091 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3276518 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3276531 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3276948 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3276965 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3277242 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3277266 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3277428 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3277556 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3277928 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3278089 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3278364 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3278465 00:27:58.141 Removing: /var/run/dpdk/spdk_pid3278590 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3278668 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3278933 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3279092 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3279253 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3279521 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3279680 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3279841 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3280107 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3280266 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3280427 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3280701 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3280852 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3281021 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3281198 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3281449 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3281602 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3281831 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3282151 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3282313 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3282473 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3282750 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3283049 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3283487 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3285724 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3311725 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3314250 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3321338 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3325139 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3327396 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3327916 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3331911 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3335804 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3335806 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3336463 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3337113 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3337663 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3338059 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3338093 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3338318 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3338456 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3338458 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3339117 00:27:58.400 Removing: 
/var/run/dpdk/spdk_pid3339659 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3340322 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3340721 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3340724 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3340984 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3341872 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3342593 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3347970 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3348248 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3350891 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3355219 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3357277 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3363716 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3369003 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3370241 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3370909 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3381165 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3383390 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3408175 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3410973 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3412161 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3413584 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3413723 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3413819 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3413884 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3414819 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3416136 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3416858 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3417170 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3418778 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3419206 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3419770 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3422303 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3428253 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3431022 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3434816 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3435888 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3436986 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3439546 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3441912 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3446040 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3446166 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3449193 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3449323 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3449465 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3450149 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3450198 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3453015 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3453350 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3456017 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3457877 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3461306 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3464644 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3471143 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3475637 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3475639 00:27:58.400 Removing: /var/run/dpdk/spdk_pid3488768 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3489180 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3489587 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3490114 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3490575 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3491101 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3491513 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3491931 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3494443 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3494696 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3498525 00:27:58.659 Removing: 
/var/run/dpdk/spdk_pid3498693 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3500303 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3505357 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3505366 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3508283 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3509683 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3511090 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3511953 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3513366 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3514234 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3520216 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3520548 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3520936 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3522497 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3522892 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3523175 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3525630 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3525641 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3527098 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3527464 00:27:58.659 Removing: /var/run/dpdk/spdk_pid3527589 00:27:58.659 Clean 00:27:58.659 13:07:16 -- common/autotest_common.sh@1451 -- # return 0 00:27:58.659 13:07:16 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:58.659 13:07:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:58.659 13:07:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.659 13:07:16 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:58.659 13:07:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:58.659 13:07:16 -- common/autotest_common.sh@10 -- # set +x 00:27:58.659 13:07:16 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:58.659 13:07:16 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:58.659 13:07:16 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:58.659 13:07:16 -- spdk/autotest.sh@391 -- # hash lcov 00:27:58.659 13:07:16 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:58.659 13:07:16 -- spdk/autotest.sh@393 -- # hostname 00:27:58.659 13:07:16 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:58.918 geninfo: WARNING: invalid characters removed from testname! 
00:28:30.990 13:07:45 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:30.990 13:07:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:34.276 13:07:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:37.555 13:07:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:40.084 13:07:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:43.361 13:08:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:45.886 13:08:03 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:45.886 13:08:03 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.886 13:08:03 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:45.886 13:08:03 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.886 13:08:03 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.886 13:08:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.886 13:08:03 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.886 13:08:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.886 13:08:03 -- paths/export.sh@5 -- $ export PATH 00:28:45.886 13:08:03 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.886 13:08:03 -- common/autobuild_common.sh@472 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:45.886 13:08:03 -- common/autobuild_common.sh@473 -- $ date +%s 00:28:45.886 13:08:03 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721041683.XXXXXX 00:28:45.886 13:08:03 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721041683.V0prOb 00:28:45.886 13:08:03 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:28:45.886 13:08:03 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:28:45.886 13:08:03 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:45.887 13:08:03 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:45.887 13:08:03 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:45.887 13:08:03 -- common/autobuild_common.sh@489 -- $ get_config_params 00:28:45.887 13:08:03 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:45.887 13:08:03 -- common/autotest_common.sh@10 -- $ set +x 00:28:45.887 13:08:03 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:45.887 13:08:03 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:28:45.887 13:08:03 -- pm/common@17 -- $ local monitor 00:28:45.887 13:08:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.887 13:08:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.887 13:08:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.887 13:08:03 -- pm/common@21 -- $ date +%s 00:28:45.887 13:08:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:45.887 13:08:03 -- pm/common@21 -- $ date +%s 00:28:45.887 
13:08:03 -- pm/common@25 -- $ sleep 1 00:28:45.887 13:08:03 -- pm/common@21 -- $ date +%s 00:28:45.887 13:08:03 -- pm/common@21 -- $ date +%s 00:28:45.887 13:08:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721041683 00:28:45.887 13:08:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721041683 00:28:45.887 13:08:03 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721041683 00:28:45.887 13:08:03 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.release_build.sh.1721041683 00:28:45.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721041683_collect-vmstat.pm.log 00:28:45.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721041683_collect-cpu-load.pm.log 00:28:45.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721041683_collect-cpu-temp.pm.log 00:28:45.887 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.release_build.sh.1721041683_collect-bmc-pm.bmc.pm.log 00:28:46.825 13:08:04 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:28:46.825 13:08:04 -- spdk/release_build.sh@10 -- $ [[ 0 -eq 1 ]] 00:28:46.825 13:08:04 -- spdk/release_build.sh@1 -- $ stop_monitor_resources 00:28:46.825 13:08:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:46.825 13:08:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:46.825 13:08:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:46.825 13:08:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:46.825 13:08:04 -- pm/common@44 -- $ pid=3537204 00:28:46.825 13:08:04 -- pm/common@50 -- $ kill -TERM 3537204 00:28:46.825 13:08:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:46.825 13:08:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:46.825 13:08:04 -- pm/common@44 -- $ pid=3537206 00:28:46.825 13:08:04 -- pm/common@50 -- $ kill -TERM 3537206 00:28:46.825 13:08:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:46.825 13:08:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:46.825 13:08:04 -- pm/common@44 -- $ pid=3537208 00:28:46.825 13:08:04 -- pm/common@50 -- $ kill -TERM 3537208 00:28:46.825 13:08:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:46.825 13:08:04 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:46.825 13:08:04 -- pm/common@44 -- $ pid=3537237 00:28:46.825 13:08:04 -- pm/common@50 -- $ sudo -E kill -TERM 3537237 00:28:46.825 + [[ -n 3181770 ]] 00:28:46.825 + sudo kill 3181770 00:28:46.834 [Pipeline] 
} 00:28:46.854 [Pipeline] // stage 00:28:46.860 [Pipeline] } 00:28:46.879 [Pipeline] // timeout 00:28:46.884 [Pipeline] } 00:28:46.903 [Pipeline] // catchError 00:28:46.909 [Pipeline] } 00:28:46.928 [Pipeline] // wrap 00:28:46.935 [Pipeline] } 00:28:46.953 [Pipeline] // catchError 00:28:46.962 [Pipeline] stage 00:28:46.964 [Pipeline] { (Epilogue) 00:28:46.981 [Pipeline] catchError 00:28:46.983 [Pipeline] { 00:28:46.999 [Pipeline] echo 00:28:47.001 Cleanup processes 00:28:47.007 [Pipeline] sh 00:28:47.296 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:47.296 3537354 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:47.296 3537458 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:47.311 [Pipeline] sh 00:28:47.597 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:47.597 ++ grep -v 'sudo pgrep' 00:28:47.597 ++ awk '{print $1}' 00:28:47.597 + sudo kill -9 3537354 00:28:47.612 [Pipeline] sh 00:28:47.940 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:56.088 [Pipeline] sh 00:28:56.373 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:56.373 Artifacts sizes are good 00:28:56.387 [Pipeline] archiveArtifacts 00:28:56.394 Archiving artifacts 00:28:56.590 [Pipeline] sh 00:28:56.871 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:56.885 [Pipeline] cleanWs 00:28:56.894 [WS-CLEANUP] Deleting project workspace... 00:28:56.894 [WS-CLEANUP] Deferred wipeout is used... 00:28:56.902 [WS-CLEANUP] done 00:28:56.903 [Pipeline] } 00:28:56.924 [Pipeline] // catchError 00:28:56.936 [Pipeline] sh 00:28:57.221 + logger -p user.info -t JENKINS-CI 00:28:57.231 [Pipeline] } 00:28:57.249 [Pipeline] // stage 00:28:57.256 [Pipeline] } 00:28:57.272 [Pipeline] // node 00:28:57.278 [Pipeline] End of Pipeline 00:28:57.423 Finished: SUCCESS